
Economists have long recognized that innovation is key to economic growth and vibrant competition. As an Organisation for Economic Co-operation and Development (OECD) report on innovation and growth explains, “innovative activity is the main driver of economic progress and well-being as well as a potential factor in meeting global challenges in domains such as the environment and health. . . . [I]nnovation performance is a crucial determinant of competitiveness and national progress.”

It follows that an economically rational antitrust policy should be highly attentive to innovation concerns. In a December 2020 OECD paper, David Teece and Nicolas Petit caution that antitrust today is “missing broad spectrum competition that delivers innovation, which in turn is the main driver of long term growth in capitalist economies.” Thus, the authors stress that “[i]t is about time to put substance behind economists’ and lawyers’ long time admonition to inject more dynamism in our analysis of competition. An antitrust renaissance, not a revolution, is long overdue.”

Accordingly, before the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) finalize their new draft merger guidelines, they would be well-advised to take heed of new research finding that “there is an important connection between merger activity and innovation.” This connection is described in a provocative new NERA Economic Consulting paper by Robert Kulick and Andrew Card titled “Mergers, Industries, and Innovation: Evidence from R&D Expenditures and Patent Applications.” As the executive summary explains (citation deleted):

For decades, there has been a broad consensus among policymakers, antitrust enforcers, and economists that most mergers pose little threat from an antitrust perspective and that mergers are generally procompetitive. However, over the past year, leadership at the FTC and DOJ has questioned whether mergers are, as a general matter, economically beneficial and asserted that mergers pose an active threat to innovation. The Agencies have also set the stage for a substantial increase in the scope of merger enforcement by focusing on new theories of anticompetitive harm such as elimination of potential competition from nascent competitors and the potential for cumulative anticompetitive harm from serial acquisitions. Despite the importance of the question of whether mergers have a positive or negative effect on industry-level innovation, there is very little empirical research on the subject. Thus, in this study, we investigate this question utilizing, what is to our knowledge, a never before used dataset combining industry-level merger data from the FTC/DOJ annual HSR reports with industry-level data from the NSF on R&D expenditure and patent applications. We find a strong positive and statistically significant relationship between merger activity and industry-level innovative activity. Over a three- to four-year cycle, a given merger is associated with an average increase in industry-level R&D expenditure of between $299 million and $436 million in R&D intensive industries. Extrapolating our results to the industry level implies that, on average, mergers are associated with an increase in R&D expenditure of between $9.27 billion and $13.52 billion per year in R&D intensive industries and an increase of between 1,430 and 3,035 utility patent applications per year. Furthermore, using a statistical technique developed by Nobel Laureate Clive Granger, we find that the direction of causality goes, to a substantial extent, directly from merger activity to increased R&D expenditure and patent applications. Based on these findings we draw the following key conclusions:

  • There is no evidence that mergers are generally associated with reduced innovation, nor do the results indicate that supposedly lax antitrust enforcement over the period from 2008 to 2020 diminished innovative activity. Indeed, R&D expenditure and patent applications increased substantially over the period studied, and this increase was directly linked to increases in merger activity.
  • In previous research, we found that “trends in industrial concentration do not provide a reliable basis for making inferences about the competitive effects of a proposed merger” as “trends in concentration may simply reflect temporary fluctuations which have no broader economic significance” or are “often a sign of increasing rather than decreasing market competition.” This study presents further evidence that previous consolidation in an industry or a “trend toward concentration” may reflect procompetitive responses to competitive pressures, and therefore should not play a role in merger review beyond that already embodied in the market-level concentration screens considered by the Agencies.
  • The Agencies should proceed cautiously in pursuing novel theories of anticompetitive harm; our findings are consistent with the prevailing consensus from the previous decades that there is an important connection between merger activity and innovation, and thus, a broad “anti-merger” policy, particularly one pursued in the absence of strong empirical evidence, has the potential to do serious harm by perversely inhibiting innovative activity.
  • Due to the link between mergers and innovative activity in R&D intensive industries where the potential for anticompetitive consequences can be resolved through remedies, relying on remedies rather than blocking transactions outright may encourage innovation while protecting consumers where there are legitimate competitive concerns about a particular transaction.
  • The potential for mergers to create procompetitive benefits should be taken seriously by policymakers, antitrust enforcers, courts, and academics and the Agencies should actively study the potential benefits, in addition to the costs, of mergers.
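
A brief note on the method the authors invoke: in Granger’s framework, one time series “Granger-causes” another if its past values improve forecasts of the other beyond what that series’ own history provides. The following sketch shows what such a test looks like in practice, using the statsmodels implementation on purely synthetic data; it illustrates the technique only and is not a reconstruction of the Kulick & Card dataset or specification.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic illustration: merger counts "Granger-cause" R&D spending if past
# merger activity improves forecasts of R&D beyond R&D's own history.
rng = np.random.default_rng(42)
n = 60
mergers = rng.poisson(10, n).astype(float)
rd = np.zeros(n)
for t in range(1, n):
    rd[t] = 0.5 * rd[t - 1] + 3.0 * mergers[t - 1] + rng.normal(0.0, 1.0)

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([rd, mergers])
results = grangercausalitytests(data, maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]  # F-test p-value at lag 1
print(f"p-value at lag 1: {p_value:.3g}")  # small value -> past mergers help predict R&D
```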

In short, the Kulick & Card paper lends valuable empirical support to an economics-based approach to merger analysis that fully takes into account innovation concerns. If the FTC and DOJ truly care about strengthening the American economy (consistent with “President Biden’s stated goals of renewing U.S. innovation and global competitiveness”—see, e.g., here and here), they should bear this research in mind when crafting new merger guidelines. An emphasis in the guidelines on avoiding interference with merger-related innovation (taking into account research by such scholars as Kulick, Card, Teece, and Petit) would demonstrate that the antitrust agencies are fully behind President Joe Biden’s plans to promote an innovative economy.

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that not all firms competing for attention should automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There is some tension among these three claims. On the one hand, proponents advocate a broad notion of competition for attention, which ensures that firms are seen as competitively related and thus boosts the prospects that antitrust interventions targeting them will succeed. On the other hand, proponents fail to follow that logic to its natural conclusion: they underplay the competitive constraints necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
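
To make the profitability logic concrete, consider the standard critical-loss arithmetic that typically accompanies a SSNIP exercise. The sketch below uses hypothetical numbers and is purely illustrative:

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of unit sales a hypothetical monopolist can lose before a price
    increase of size `ssnip` becomes unprofitable, using the standard
    critical-loss formula X / (X + M) with a constant percentage margin M."""
    return ssnip / (ssnip + margin)

# Hypothetical numbers: a 5% SSNIP with a 40% gross margin.
cl = critical_loss(0.05, 0.40)
print(f"critical loss: {cl:.1%}")  # ~11.1%
# If the loss of sales predicted from the competitive baseline exceeds ~11.1%,
# the 5% increase is unprofitable and the candidate market is drawn too narrowly.
```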

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Nobel laureate Jean Tirole show in their seminal work, changes to the price level and changes to the price structure each affect economic output in two-sided markets.

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably increase Google’s revenues, as doing so would simultaneously increase output in the ad market (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as independent variables.
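
A toy model (offered purely for illustration, with made-up functional forms) makes this interdependence visible: suppose viewership falls as the ad load rises, while ad revenue equals ad price times ad load times viewership. Raising the ad load then mechanically moves both sides at once:

```python
# Toy two-sided illustration (made-up functional forms, not an empirical claim).
P_AD = 1.0  # ad price per unit of ad load, held constant

def viewership(ad_load: float) -> float:
    """Viewers drift away as the ad load rises."""
    return max(0.0, 1.0 - 0.4 * ad_load)

def ad_revenue(ad_load: float) -> float:
    """Ad revenue = ad price x ad load x viewership."""
    return P_AD * ad_load * viewership(ad_load)

for a in (0.5, 1.0, 1.25, 1.5, 2.0):
    print(f"ad load {a:.2f}: viewers {viewership(a):.2f}, revenue {ad_revenue(a):.3f}")
# Revenue peaks at an ad load of 1.25 and falls thereafter: raising the
# "attention price" simultaneously changes ad-side output, so ad load cannot be
# manipulated as an independent variable the way a SSNIP manipulates price.
```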

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.
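
For reference, both concentration measures are simple functions of market shares; the following sketch, with hypothetical shares, shows how they are computed:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman index: the sum of squared market shares, with
    shares expressed in percentage points (index runs from near 0 to 10,000)."""
    return sum(s ** 2 for s in shares_pct)

def cr4(shares_pct):
    """Four-firm concentration ratio: combined share of the four largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:4])

shares = [30, 30, 20, 10, 5, 5]  # hypothetical industry
print(hhi(shares))  # 2350 -> "moderately concentrated" under the 2010 guidelines' bands
print(cr4(shares))  # 90
```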

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.

In short, the strongest justification for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands output and/or raises the quality of goods and services while using reduced capacity and fewer workers. Such a merger would free those resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is a risk that such a merger could be viewed unfavorably under new merger guidelines revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

This question answers itself, by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends”—the promotion of competition—served by Section 7 enforcement.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because efficiency claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable part of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more helpful approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.


The Federal Trade Commission (FTC) has taken another step away from case-specific evaluation of proposed mergers and toward an ex ante regulatory approach in its Oct. 25 “Statement of the Commission on Use of Prior Approval Provisions in Merger Orders.” Though not unexpected, this unfortunate initiative once again manifests the current FTC leadership’s disdain for long-accepted economically sound antitrust-enforcement principles.

Discussion

High levels of merger activity should, generally speaking, be viewed as a symptom of a vibrant economy, not a reason for economic concern. Horizontal mergers typically are driven by the potential to realize real cost savings, unrelated to anticompetitive reductions in output.

Non-horizontal mergers often bring about welfare-enhancing reductions in double marginalization, while uniting complements and achieving other efficiency-generating synergies. More generally, proposed acquisitions frequently reflect an active market for corporate control that seeks to reallocate scarce resources to higher-valued uses (see, for example, Henry Manne’s seminal article on “Mergers and the Market for Corporate Control”). Finally, by facilitating cost reductions, synergies, and improvements in resource allocations within firms, mergers may allow the new consolidated entity to compete more effectively in the marketplace, thereby enhancing competition.

Given the economic benefits frequently generated by mergers, government antitrust enforcers should not discourage them, nor should they intervene to block them, absent a strong showing that a particular transaction would likely reduce competition and harm consumer welfare. In the United States, the Hart-Scott-Rodino Premerger Notification Act of 1976 (HSR) and its implementing regulations generally have reflected this understanding. They have done this by requiring that proposed transactions above a certain size threshold be notified to the FTC and the U.S. Justice Department (DOJ), and by providing a framework for timely review, allowing most notified mergers to close promptly.

In the relatively few cases where agency enforcement staff have identified competitive problems, the HSR framework usually has enabled timely negotiation of possible competitive fixes (divestitures and, less typically, behavioral remedies). Where fixes have not been feasible, filing parties generally have been able to decide whether to drop a transaction or prepare for litigation within a reasonable time period. Under the HSR framework, enforcers generally have respected the time sensitivity of merger proposals and acted expeditiously (with a few exceptions) to review complicated and competitively sensitive transactions. The vast majority of HSR filings that facially raise no plausible competitive issues historically have been dealt with swiftly—often through “early termination” policies that provide the merging parties an antitrust go-ahead well before the end of HSR’s initial 30-day review period.

In short, although far from perfect, HSR processes have sought to minimize regulatory impediments to merger activity, consistent with the statutory mandate to identify and prevent anticompetitive mergers.      

Regrettably, under the leadership of Chair Lina M. Khan, the FTC has taken unprecedented steps to undermine the well-understood HSR framework. As I wrote recently:

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and

2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

The FTC’s merger-review reign of error continues. Most recently, it released a policy guidance statement that effectively transforms the commission into a merger regulator whose assent is required for a specific category of mergers. This policy is at odds with HSR, which is designed to facilitate merger reviews, not to serve as a regulatory-approval mechanism. As the FTC explains in its Oct. 25 statement (citation to 1995 Statement omitted) (approved by a 3-2 vote, with Commissioners Noah Joshua Phillips and Christine S. Wilson dissenting):

On July 21, 2021, the Commission voted to rescind the 1995 Policy Statement on Prior Approval and Prior Notice Provisions (“1995 Statement”). The 1995 Statement ended the Commission’s then-longstanding practice of incorporating prior approval and prior notice provisions in Commission orders addressing mergers. With the rescission of the 1995 statement, the Commission returns now to its prior practice of routinely requiring merging parties subject to a Commission order to obtain prior approval from the FTC before closing any future transaction affecting each relevant market for which a violation was alleged. . . .

In addition, from now on, in matters where the Commission issues a complaint to block a merger and the parties subsequently abandon the transaction, the agency will engage in a case-specific determination as to whether to pursue a prior approval order, focusing on the factors identified below with respect to use of broader prior approval provisions. The fact that parties may abandon a merger after litigation commences does not guarantee that the Commission will not subsequently pursue an order incorporating a prior approval provision. . . .

In some situations where stronger relief is needed, the Commission may decide to seek a prior approval provision that covers product and geographic markets beyond just the relevant product and geographic markets affected by the merger. No single factor is dispositive; rather, the Commission will take a holistic view of the circumstances when determining the length and breadth of prior approval provisions. [Six factors listed include the nature of the transaction; the level of market concentration; the degree to which the transaction increases concentration; the degree to which one of the parties pre-merger likely had market power; the parties’ history of acquisitiveness; and evidence of anticompetitive market dynamics.]

The Oct. 25 Statement is highly problematic in several respects. Its oversight requirements may discourage highly effective consent decree “fixes” of potential mergers, leading to wasteful litigation—or, alternatively, the abandonment of efficient transactions. What’s more, the threat of FTC prior approval orders (based on multiple criteria subject to manipulation by the FTC), even when parties abandon a proposed transaction (and thus, effectively have “done nothing”), smacks of unwarranted regulation of future corporate plans of disfavored firms, raising questions of fundamental fairness.

All told, the new requirements, combined with the FTC’s policies to end early terminations and to stop “greenlighting” routine merger transactions after a 30-day review, are yet more signs that the well-understood HSR consensus has been unilaterally abandoned by the FTC, based on purely partisan commission votes, despite the lack of any public consultation. The FTC’s abrupt and arbitrary merger-review-related actions will harm the economy by discouraging welfare-promoting consolidations. These actions also fly in the face of sound public administration.

Conclusion

The FTC continues to move from its historic role of antitrust enforcer to that of antitrust regulator at warp speed, based on a series of 3-2 votes. The commission’s abandonment of a well-established bipartisan approach to HSR policy is particularly troublesome, given the new risks it creates for private parties considering acquisitions. These new risks will likely deter an unknown number of efficiency-enhancing, innovative combinations that could have benefited consumers and substantially strengthened the American economy.

Perhaps the imminent confirmation of Jonathan Kanter—an individual with many years of practical experience as a leading antitrust practitioner—to be assistant attorney general for antitrust will bring a more reasonable perspective to antitrust agency HSR policies. It may even convince a majority of the commission to return to the bipartisan HSR merger-review framework that has served the American economy well.

If not, perhaps congressional overseers might wish to investigate the implications for the American innovation economy and the rule of law stemming from the FTC’s de facto abandonment of HSR principles. Whether to fundamentally alter merger-review procedures should be up to Congress, not to three unelected officials.    

Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.

Introduction

The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.

Strategic Approach

The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.

This section is long on glittering generalities. First, it begins with the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Are new enforcement standards being proposed to supplement or displace consumer-welfare enhancement?

Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why these observations are superior to alternative means for spotting illegal behavior) is left unexplained.

Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)

Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.

Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.

Policy Priorities

The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.

First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.) 

Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out anticompetitive behavior and consumer-protection violations. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion is also in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.

Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications that would be involved in challenging such provisions, and a recognition of possible welfare benefits that such restraints could generate under many circumstances. In that vein, mere perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.

Operational Objectives

The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.

Evaluating the VP Memo: 4 Key Takeaways

The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:

  1. Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
  2. Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
  3. The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions—despite the fact that several appellate courts had found such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
  4. Chair Khan’s unilateral statement of her policy priorities embodied in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been adopted by unanimous vote. By ignoring this longstanding bipartisan practice, the VP Memo will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seemingly unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by 3-2 votes that appear to reflect little or no consultation with minority commissioners.

Conclusion

The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.

As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to recraft a new consensus policy document.   

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
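
To make the bargaining logic concrete, here is a minimal sketch in Python. The profit figures are purely illustrative assumptions of our own choosing, not numbers drawn from any of the papers discussed here:

```python
# Illustrative profit figures only; the logic, not the numbers, matters.
monopoly_profit = 100          # incumbent's profit if entry never occurs
incumbent_duopoly_profit = 35  # incumbent's profit if the rival succeeds
entrant_duopoly_profit = 25    # rival's profit if it succeeds

# The incumbent will pay at most what entry would cost it; the rival
# will accept no less than what it expects to earn by competing.
max_offer = monopoly_profit - incumbent_duopoly_profit  # 65
min_ask = entrant_duopoly_profit                        # 25

# A mutually advantageous preemptive acquisition exists precisely when
# monopoly profits exceed joint duopoly profits.
deal_feasible = max_offer > min_ask
assert deal_feasible == (
    monopoly_profit > incumbent_duopoly_profit + entrant_duopoly_profit
)
print(f"Bargaining range: {min_ask} to {max_offer}"
      if deal_feasible else "No mutually advantageous deal")

# If instead the rival expects to displace the incumbent outright
# (competition "for the market"), its minimum ask rises toward
# monopoly_profit and the bargaining range disappears.
```

The sketch simply restates the standard result: a preemptive deal is feasible only because a monopolist earns more than two duopolists combined.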

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
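
For readers who want to verify the arithmetic, here is a short, self-contained Python sketch of the hypothetical, using only the assumed figures above:

```python
def screening_errors(total, bad, accuracy, apply_test):
    """Count (false_positives, false_negatives) under a screening policy.

    With no test, every merger is cleared, so all bad deals slip through.
    With the test, each merger is classified correctly with probability
    `accuracy`.
    """
    good = total - bad
    if not apply_test:
        return 0, bad
    false_positives = int(good * (1 - accuracy))  # good deals blocked
    false_negatives = int(bad * (1 - accuracy))   # bad deals cleared
    return false_positives, false_negatives

for bad in (1_000, 2_500):
    fp, fn = screening_errors(10_000, bad, accuracy=0.75, apply_test=True)
    _, fn_none = screening_errors(10_000, bad, accuracy=0.75,
                                  apply_test=False)
    print(f"{bad:>5} bad deals: test yields {fp + fn} errors "
          f"({fp} FP, {fn} FN); doing nothing yields {fn_none} errors")
```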

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” a vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
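
A back-of-the-envelope calculation, using only the continuation rates reported above, illustrates the point. Note that this is our naive difference-in-rates bound, not the authors’ own estimation method:

```python
# Continuation rates reported by Cunningham, Ederer, and Ma.
continue_non_overlapping = 0.175  # acquired projects that continue
continue_overlapping = 0.134      # same, when R&D pipelines overlap

# The headline figure: overlapping projects are ~23.4% less likely
# to see continued development, in relative terms.
gap = continue_non_overlapping - continue_overlapping  # 0.041
relative_decline = gap / continue_non_overlapping      # ~0.234
print(f"Relative decline in continuation: {relative_decline:.1%}")

# Even attributing the entire 4.1-point gap to "killing," only about
# 4% of overlapping acquisitions would involve a killed project; on
# these numbers, roughly 96% of overlapping deals look benign.
print(f"Naive upper bound on killed share: {gap:.1%}")
print(f"Implied benign share of overlapping deals: {1 - gap:.1%}")
```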

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that the prospect of being acquired could increase innovation by raising its expected returns.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort may suggest that the clearance of certain mergers was not optimal, they are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here, and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the fulcrum of policy debates that is the pharmaceutical industry. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they are yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions inject unwarranted uncertainty into merger planning and undermine the rule of law. Denying early termination to the sorts of transactions that have routinely received it not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned that the new policy will chill legal M&A transactions across the board, a particularly unfortunate result when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]
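
Applying those historical percentages to the 2021 filing volume, a rough and purely illustrative calculation using only the figures quoted above, underscores the point:

```python
# Figures from the passage quoted above.
filings = 2_900              # HSR filings through the end of July 2021
investigated_rate = 0.13     # historical share investigated in some fashion
second_request_rate = 0.03   # historical share receiving a Second Request

investigated = filings * investigated_rate        # ~377 deals
second_requests = filings * second_request_rate   # ~87 deals
purely_administrative = filings - investigated    # ~2,523 deals

print(f"Investigated in some fashion: ~{investigated:.0f}")
print(f"Substantive Second Requests:  ~{second_requests:.0f}")
print(f"Purely administrative:        ~{purely_administrative:.0f}")
```

On these numbers, the overwhelming majority of filings would consume few, if any, agency resources.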

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added.)

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest a greater need for a thorough investigation), yet they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

FTC Commissioner Josh Wright pens an incredibly important dissent in the FTC’s recent Ardagh/Saint-Gobain merger review.

At issue is how pro-competitive efficiencies should be considered by the agency under the Merger Guidelines.

As Josh notes, the core problem is the burden of proof:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.

In the summer of 1995, I spent a few weeks at the FTC. It was the end of the summer and nearly the entire office was on vacation, so I was left dealing with the most arduous tasks. In addition to fielding calls from Joe Sims prodding the agency to finish the Turner/Time Warner merger consent, I also worked on early drafting of the efficiencies defense, which was eventually incorporated into the 1997 Merger Guidelines revision.

The efficiencies defense was added to the Guidelines specifically to correct a defect of the pre-1997 Guidelines era in which

It is unlikely that efficiencies were recognized as an antitrust defense…. Even if efficiencies were thought to have a significant impact on the outcome of the case, the 1984 Guidelines stated that the defense should be based on “clear and convincing” evidence. Appeals Court Judge and former Assistant Attorney General for Antitrust Ginsburg has recently called reaching this standard “well-nigh impossible.” Further, even if defendants can meet this level of proof, only efficiencies in the relevant anticompetitive market may count.

The clear intention was to improve outcomes by ensuring that net pro-competitive mergers wouldn’t be thwarted. But even under the 1997 (and still under the 2010) Guidelines,

the merging firms must substantiate efficiency claims so that the Agency can verify by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved (and any costs of doing so), how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific. Efficiency claims will not be considered if they are vague or speculative or otherwise cannot be verified by reasonable means.

The 2006 Guidelines Commentary further supports the notion that the parties bear a substantial burden of demonstrating efficiencies.

As Josh notes, however:

Efficiencies, like anticompetitive effects, cannot and should not be presumed into existence. However, symmetrical treatment in both theory and practice of evidence proffered to discharge the respective burdens of proof facing the agencies and merging parties is necessary for consumer-welfare based merger policy.

There is no economic basis for demanding more proof of claimed efficiencies than of claimed anticompetitive harms. And the Guidelines since 1997 were (ostensibly) drafted in part precisely to ensure that efficiencies were appropriately considered by the agencies (and the courts) in their enforcement decisions.
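To make the asymmetry concrete, consider a stylized numerical sketch (the probabilities and dollar figures below are invented purely for illustration; they come from no actual case or agency methodology):

```python
# Stylized illustration (hypothetical numbers): symmetric vs. asymmetric
# treatment of predicted harms and claimed efficiencies.

p_harm, harm = 0.6, 100.0        # 60% chance of $100M in anticompetitive harm
p_eff, efficiency = 0.7, 120.0   # 70% chance of $120M in efficiencies

expected_harm = p_harm * harm                # $60M
expected_efficiency = p_eff * efficiency     # $84M

# Symmetric (probabilistic) treatment: compare expected values.
net_symmetric = expected_efficiency - expected_harm   # +$24M -> clear

# Asymmetric treatment: harms are estimated probabilistically, but
# efficiencies are credited only if "proven" (say, probability >= 0.9).
credited_efficiency = efficiency if p_eff >= 0.9 else 0.0
net_asymmetric = credited_efficiency - expected_harm  # -$60M -> block

print(f"Symmetric net effect:  {net_symmetric:+.0f}M")
print(f"Asymmetric net effect: {net_asymmetric:+.0f}M")
```

Under symmetric, probabilistic treatment the merger’s expected net effect on consumers is positive; under the asymmetric standard, the probable-but-unproven efficiencies are zeroed out and the very same merger looks harmful. That is the welfare cost of the asymmetry Josh identifies.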

But as Josh notes, this has not really been the case, much to the detriment of consumer-welfare-enhancing merger review:

To the extent the Merger Guidelines are interpreted or applied to impose asymmetric burdens upon the agencies and parties to establish anticompetitive effects and efficiencies, respectively, such interpretations do not make economic sense and are inconsistent with a merger policy designed to promote consumer welfare. Application of a more symmetric standard is unlikely to allow, as the Commission alludes to, the efficiencies defense to “swallow the whole of Section 7 of the Clayton Act.” A cursory read of the cases is sufficient to put to rest any concerns that the efficiencies defense is a mortal threat to agency activity under the Clayton Act. The much more pressing concern at present is whether application of asymmetric burdens of proof in merger review will swallow the efficiencies defense.

It benefits consumers to permit mergers that offer efficiencies that offset presumed anticompetitive effects. To the extent that the agencies, as in the Ardagh/Saint-Gobain merger, discount efficiencies evidence relative to their treatment of anticompetitive effects evidence, consumers will be harmed and the agencies will fail to fulfill their mandate.

This is an enormously significant issue, and Josh should be widely commended for raising it in this case. With luck it will spur a broader discussion and, someday, a more appropriate treatment in the Guidelines and by the agencies of merger efficiencies.


Do the 2010 Horizontal Merger Guidelines require market definition?  Will the agencies define markets in cases they bring?  Are they required to do so by the Guidelines?  By the Clayton Act?

Here is Commissioner Rosch in the FTC Annual Report (p.18):

“A significant development in 2010 was the issuance of updated Horizontal Merger Guidelines by the federal antitrust agencies. The 2010 Guidelines advance merger analysis by eliminating the need to define a relevant market and determine industry concentration at the outset.”

Compare with Commissioner Rosch’s reported remarks at the Spring Meeting:

“I want to emphasise: I don’t care what the 2010 guidelines say, you can never do away with market definition,” Rosch said.

Does the latter statement assert that the HMGs do not require market definition at all?  If so, the former statement that the agencies need not define a market at the outset certainly follows.  And why is it an “advance” to eliminate the need to define a market first, but a bad thing to eliminate it altogether in certain cases?  Of course, as DOJ (and, importantly, UCLA Bruin) economist Ken Heyer points out in his remarks at the same Spring Meeting event, and as I’ve written about here and here, most expect the agencies to continue defining markets because federal courts expect it, may require it, and failure to do so will harm the agencies’ ability to successfully bring enforcement actions.  Nonetheless, the statements do not provide much clarity on the Commissioner’s (or, for that matter, the Commission’s) views with respect to the new HMGs and the role of market definition.

Over at the DOJ, on the other hand, former Chief Economist Carl Shapiro — congratulations to the newly appointed Fiona Scott Morton — made clear the agency’s stance on the role of market definition:

“The Division recognizes the necessity of defining a relevant market as part of any merger challenge we bring.”

No such announcement from the FTC.  And Commissioner Rosch’s remarks do not clarify matters.  On the one hand, they seem to indicate the FTC will always define markets; on the other, they imply that the agency does so despite the fact that the Guidelines say it doesn’t have to.  With Shapiro gone, the DOJ view is unclear at the moment.  Perhaps all of this is much ado about nothing as a practical matter — though I’m not sure of that.  But if the Agencies both consider market definition a “necessity,” why not just say so?  Why not write: “market definition is required by Section 7 of the Clayton Act and the agencies will, at some point in the analysis, define a relevant market”?

Market definition requirement aside, my views on the positive developments in the new Merger Guidelines and the larger problem they present — asymmetrically updating theories of competitive harm without doing so on the efficiencies side — are articulated in this forthcoming paper.

Merger Retrospective

Steve Salop —  4 April 2011

Several years ago, the DOJ cleared a merger between Whirlpool and Maytag.  The primary defense was that post-merger prices could not rise because of intense competition from foreign competitors like LG and Samsung.  Apparently the actual competition was more than Whirlpool wanted to bear.  Guess what?  Mr. Laissez-Faire Antitrust, meet Dr. Public Choice.  The Wall Street Journal has reported that Whirlpool has filed a dumping complaint against LG and Samsung.  Whirlpool’s dumping complaint involves refrigerators, while the merger concerns centered on washers and dryers more than refrigerators.  But the complaint sends a signal to LG and Samsung.  It also certainly raises a caution about relying on foreign competition, and suggests a potential remedial provision.

Smoothing Demand Kinks

Steve Salop —  4 April 2011

One criticism of the unilateral effects analysis in the 2010 Merger Guidelines is that demand curves are kinked at the current price.  A small increase in price will dramatically reduce the quantity demanded.  One rationale for the kink is that people over-react to small price changes and dramatically reduce demand.  As a result of this behavioral economics deviation from standard rational behavior, it is claimed, merging firms will not raise prices even when the merger increases the opportunity cost of increasing output.  (The opportunity cost increases because some of the additional sales now come at the expense of the new merger partner’s sales.)  It has been argued that such kinks are ubiquitous, whatever the current price is.  For some recent views on this issue, see the recent anti-kink article by Werden and the pro-kink reply by Scheffman and Simons.

A story in today’s New York Times nicely illustrates one of the problems with the kinked demand story.  Instead of raising prices directly, consumer products firms can and commonly do raise per-unit prices by reducing package sizes.  Changes in package sizes do not create a disproportionate reaction, perhaps because they are less visible to busy shoppers.  Whatever the reason, the smaller package raises the effective price per unit while avoiding the behavioral economics kink.  Of course, this is not to say that firms never raise prices; they do.  Moreover, even if a kink did exist for reasons grounded in behavioral economics or menu costs, any such kink is likely just temporary.  In contrast, a merger is permanent.
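The arithmetic of the “shrink” is straightforward; a quick sketch (with hypothetical numbers) shows how an unchanged shelf price conceals a double-digit per-unit increase:

```python
# Hypothetical example: same shelf price, smaller package.
price = 3.00                       # shelf price in dollars (unchanged)
old_size, new_size = 16.0, 14.0    # ounces per package

old_unit_price = price / old_size  # $0.1875 per ounce
new_unit_price = price / new_size  # ~$0.2143 per ounce

increase = (new_unit_price / old_unit_price - 1) * 100
print(f"Effective per-unit price increase: {increase:.1f}%")  # ~14.3%
```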

It is for these reasons that this kinked economics has not gotten much traction in the current debate.  But these presumptions do not mean that kinked-demand arguments can never be raised in a merger.  If there were evidence of a low pass-through rate of variable cost into higher prices over a significant period of time, that evidence would be relevant to a more refined analysis of upward pricing pressure.
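For readers who want to see where such pass-through evidence would enter, here is a minimal sketch of the standard upward-pricing-pressure arithmetic (the prices, margins, diversion ratio, and pass-through rate below are assumptions chosen for illustration, not estimates from any actual merger):

```python
# Minimal sketch of upward pricing pressure (UPP/GUPPI) with pass-through.
# All numbers are illustrative assumptions, not estimates from any case.

p1, p2 = 10.0, 12.0   # pre-merger prices of the two merging products
c2 = 6.0              # marginal cost of product 2
d12 = 0.25            # diversion ratio: share of product 1's lost sales
                      # recaptured by product 2 after the merger

# The merger raises product 1's opportunity cost of output by the margin
# earned on sales diverted to its new merger partner:
upp1 = d12 * (p2 - c2)        # $1.50 per unit of product 1

# GUPPI expresses the same pressure relative to product 1's price:
guppi1 = upp1 / p1            # 15%

# A low pass-through rate (e.g., from evidence of sticky prices over a
# significant period) damps the predicted price effect of that cost rise:
pass_through = 0.3            # assumed, low
predicted_price_rise = pass_through * upp1   # $0.45, vs. $0.75 at 50%

print(f"UPP: ${upp1:.2f}  GUPPI: {guppi1:.0%}  "
      f"Predicted rise: ${predicted_price_rise:.2f}")
```

The kink argument, in effect, asserts that the pass-through rate in the last step is zero at the current price; evidence on actual pass-through over time is what would discipline that claim.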