
[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from important flaws. First, they rest on restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must also be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
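
To make this asymmetry concrete, the following minimal sketch plugs illustrative numbers into the standard logic: a mutually advantageous buyout price exists only when the incumbent's monopoly profits exceed the parties' combined duopoly profits, and it disappears once the rival expects to win the market outright. The profit figures are arbitrary assumptions, not estimates drawn from any real market.

```python
# Illustrative sketch of the "monopoly profits exceed duopoly profits" premise.
# All profit figures are arbitrary assumptions chosen purely for illustration.

def buyout_is_mutually_advantageous(monopoly_profit, incumbent_duopoly_profit, rival_expected_profit):
    """True when the incumbent's willingness to pay (the profits it would lose
    if entry occurred) exceeds what the rival expects to earn by competing."""
    willingness_to_pay = monopoly_profit - incumbent_duopoly_profit
    return willingness_to_pay > rival_expected_profit

# Monopoly-maintenance setting: 100 as a monopolist, 40 each under duopoly.
print(buyout_is_mutually_advantageous(100, 40, 40))   # True: 60 > 40, so a deal can profit both sides

# Competition "for the market": the rival expects to displace the incumbent entirely.
print(buyout_is_mutually_advantageous(100, 40, 95))   # False: no price both sides prefer to competing
```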

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
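
For readers who want to check the arithmetic, the short sketch below reproduces the hypothetical's error counts. Everything in it (the 10,000 mergers, the share that is anticompetitive, the 75% accuracy) is simply an assumption stated above, not an empirical estimate.

```python
# Back-of-the-envelope reproduction of the hypothetical above.
# All inputs are illustrative assumptions from the text, not empirical estimates.

def error_counts(total_mergers, anticompetitive, accuracy):
    """Return (correct, false_positives, false_negatives) for a screen that
    classifies each merger correctly with probability `accuracy`."""
    benign = total_mergers - anticompetitive
    false_positives = benign * (1 - accuracy)            # good deals blocked
    false_negatives = anticompetitive * (1 - accuracy)   # bad deals cleared
    correct = total_mergers - false_positives - false_negatives
    return correct, false_positives, false_negatives

for bad_deals in (1_000, 2_500):
    with_test = error_counts(10_000, bad_deals, accuracy=0.75)
    no_test = (10_000 - bad_deals, 0, bad_deals)  # clearing everything yields only false negatives
    print(f"{bad_deals} anticompetitive deals -> test: {with_test}, no test: {no_test}")
```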

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide for understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number drops to 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this focus misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
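
A back-of-the-envelope calculation, using only the continuation rates quoted above and the (strong) assumption that the entire gap between them reflects killer motives, illustrates why the overwhelming share of overlapping deals comes out benign even on the paper's own terms.

```python
# Back-of-the-envelope use of the continuation rates quoted above.
# Assumes (as an upper bound) that the entire gap between the two rates
# is attributable to "killer" motives rather than ordinary project selection.

continuation_non_overlapping = 0.175  # acquired projects continued, no pipeline overlap
continuation_overlapping = 0.134      # acquired projects continued, overlapping pipelines

# Share of overlapping acquisitions whose projects were dropped *because* of the overlap.
implied_killer_share = continuation_non_overlapping - continuation_overlapping
implied_benign_share = 1 - implied_killer_share

print(f"implied 'killer' share of overlapping deals: {implied_killer_share:.1%}")  # 4.1%
print(f"implied benign share of overlapping deals:  {implied_benign_share:.1%}")   # 95.9%
```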

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that acquisitions of this sort could increase overall innovation by boosting the expected returns to innovating in the first place.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not compete only along innovation-related parameters, though these are obviously important; they also compete on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through the operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Consider the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The Federal Trade Commission and 46 state attorneys general (along with the District of Columbia and the Territory of Guam) filed their long-awaited complaints against Facebook Dec. 9. The crux of the arguments in both lawsuits is that Facebook pursued a series of acquisitions over the past decade that aimed to cement its prominent position in the “personal social media networking” market. 

Make no mistake, if successfully prosecuted, these cases would represent one of the most fundamental shifts in antitrust law since passage of the Hart-Scott-Rodino Act in 1976. That law required antitrust authorities to be notified of proposed mergers and acquisitions that exceed certain value thresholds, essentially shifting the paradigm for merger enforcement from ex-post to ex-ante review.

While the prevailing paradigm does not explicitly preclude antitrust enforcers from taking a second bite of the apple via ex-post enforcement, it has created an assumption among market participants that regulatory clearance of a merger makes subsequent antitrust proceedings extremely unlikely. 

Indeed, the very point of ex-ante merger regulations is that ex-post enforcement, notably in the form of breakups, has tremendous social costs. It can scupper economies of scale and network effects on which both consumers and firms have come to rely. Moreover, the threat of costly subsequent legal proceedings will hang over firms’ pre- and post-merger investment decisions, and may thus reduce incentives to invest.

With their complaints, the FTC and state AGs threaten to undo this status quo. Even if current antitrust law allows it, pursuing this course of action threatens to quash the implicit assumption that regulatory clearance generally shields a merger from future antitrust scrutiny. Ex-post review of mergers and acquisitions does also entail some positive features, but the Facebook complaints fail to consider these complicated trade-offs. This oversight could hamper tech and other U.S. industries.

Mergers and uncertainty

Merger decisions are probabilistic. Of the thousands of corporate acquisitions each year, only a handful end up deemed “successful.” These relatively few success stories have to pay for the duds in order to preserve the incentive to invest.

Switching from ex-ante to ex-post review enables authorities to focus their attention on the most lucrative deals. It stands to reason that they will not want to launch ex-post antitrust proceedings against bankrupt firms whose assets have already been stripped. Instead, as with the Facebook complaint, authorities are far more likely to pursue high-profile cases that boost their political capital.

This would be unproblematic if:

  1. Authorities would commit to ex-post prosecution only of anticompetitive mergers; and
  2. Parties could reasonably anticipate whether their deals would be deemed anticompetitive in the future. 

If those were the conditions, ex-post enforcement would merely reduce the incentive to partake in problematic mergers. It would leave welfare-enhancing deals unscathed. But where firms could not have ex-ante knowledge that a given deal would be deemed anticompetitive, the associated error-costs should weigh against prosecuting such mergers ex post, even if such enforcement might appear desirable. The deterrent effect that would arise from such prosecutions would be applied by the market to all mergers, including efficient ones. Put differently, authorities might get the ex-post assessment right in one case, such as the Facebook proceedings, but the bigger picture remains that they could be wrong in many other cases. Firms will perceive this threat and it may hinder their investments.

There is also reason to doubt that either of the ideal conditions for ex-post enforcement could realistically be met in practice. Ex-ante merger proceedings involve significant uncertainty for the merging parties. Indeed, antitrust-merger clearance decisions routinely have an impact on the merging parties’ stock prices. If management and investors knew whether their transactions would be cleared, those effects would be priced in when a deal is announced, not when it is cleared or blocked. Likewise, if firms knew a given merger would be blocked, they would not waste their resources pursuing it.

Unless the answer is markedly different for ex-post merger reviews, authorities should proceed with caution. If parties cannot properly self-assess their deals, the threat of ex-post proceedings will weigh on pre- and post-merger investments (a breakup effectively amounts to expropriating investments that are dependent upon the divested assets). 

Furthermore, because authorities will likely focus ex-post reviews on the most lucrative deals, their incentive effects can be particularly pronounced. Parties may fear that the most successful mergers will be broken up. This could have wide-reaching effects for all merging firms that do not know whether they might become “the next Facebook.” 

Accordingly, for ex-post merger reviews to be justified, it is essential that:

  1. Their outcomes be predictable for the parties; and
  2. Analyzing the deals after the fact leads to better decision-making (fewer false acquittals and convictions) than ex-ante reviews would yield.

If these conditions are not in place, ex-post assessments will needlessly weigh down innovation, investment and procompetitive merger activity in the economy.

Hindsight does not disentangle efficiency from market power

So, could ex-post merger reviews be so predictable and effective as to alleviate the uncertainties described above, along with the costs they entail? 

Based on the recently filed Facebook complaints, the answer appears to be no. We simply do not know what the counterfactual to Facebook’s acquisitions of Instagram and WhatsApp would look like. Hindsight does not tell us whether Facebook’s acquisitions led to efficiencies that allowed it to thrive (a pro-competitive scenario), or whether Facebook merely used these deals to kill off competitors and maintain its monopoly (an anticompetitive scenario).

As Sam Bowman and I have argued elsewhere, when discussing the leaked emails that spurred the current proceedings and on which the complaints rely heavily:

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that at the time they may have caused antitrust agencies to scrutinise the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, or about the counterfactual of the merger being blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today. 

Moreover, it fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially grow to such scale organically.

In fact, contrary to what some have argued, hindsight might even complicate matters (again from Sam and me):

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

In other words, ex-post reviews will, by definition, focus on mergers where today’s outcomes seem preordained — when, in fact, they were probabilistic. This will skew decisions toward finding anticompetitive conduct. If authorities think that Instagram was destined to become great, they are more likely to find that Facebook’s acquisition was anticompetitive because they implicitly dismiss the idea that it was the merger itself that made Instagram great.

Authorities might also confuse correlation for causality. For instance, the state AGs’ complaint ties Facebook’s acquisitions of Instagram and WhatsApp to the degradation of these services, notably in terms of privacy and advertising loads. As the complaint lays out:

127. Following the acquisition, Facebook also degraded Instagram users’ privacy by matching Instagram and Facebook Blue accounts so that Facebook could use information that users had shared with Facebook Blue to serve ads to those users on Instagram. 

180. Facebook’s acquisition of WhatsApp thus substantially lessened competition […]. Moreover, Facebook’s subsequent degradation of the acquired firm’s privacy features reduced consumer choice by eliminating a viable, competitive, privacy-focused option

But these changes may have nothing to do with Facebook’s acquisition of these services. At the time, nearly all tech startups focused on growth over profits in their formative years. It should be no surprise that the platforms imposed higher “prices” on users after their acquisition by Facebook; they were maturing. Further monetizing their platforms would have been the logical next step, even absent the mergers.

It is just as hard to determine whether post-merger developments actually harmed consumers. For example, the FTC complaint argues that Facebook stopped developing its own photo-sharing capabilities after the Instagram acquisition, which the commission cites as evidence that the deal neutralized a competitor:

98. Less than two weeks after the acquisition was announced, Mr. Zuckerberg suggested canceling or scaling back investment in Facebook’s own mobile photo app as a direct result of the Instagram deal.

But it is not obvious that Facebook or consumers would have gained anything from the duplication of R&D efforts had Facebook continued to develop its own photo-sharing app. More importantly, this discontinuation is not evidence that Instagram could have overthrown Facebook. In other words, the fact that Instagram provided better photo-sharing capabilities does not necessarily imply that it could also provide a versatile platform that posed a threat to Facebook.

Finally, if Instagram’s stellar growth and photo-sharing capabilities were certain to overthrow Facebook’s monopoly, why do the plaintiffs ignore the competitive threat posed by the likes of TikTok today? Neither of the complaints makes any mention of TikTok, even though it currently has well over 1 billion monthly active users. The FTC and state AGs would have us believe that Instagram posed an existential threat to Facebook in 2012 but that Facebook faces no such threat from TikTok today. It is exceedingly unlikely that both these statements could be true, yet both are essential to the plaintiffs’ case.

Some appropriate responses

None of this is to say that ex-post review of mergers and acquisitions should be categorically out of the question. Rather, such proceedings should be initiated only with appropriate caution and consideration for their broader consequences.

When undertaking reviews of past mergers, authorities do not necessarily need to impose remedies every time they find a merger was wrongly cleared. The findings of these ex-post reviews could simply be used to adjust existing merger thresholds and presumptions. This would effectively create a feedback loop where false acquittals lead to meaningful policy reforms in the future.

At the very least, it may be appropriate for policymakers to set a higher bar for findings of anticompetitive harm and imposition of remedies in such cases. This would reduce the undesirable deterrent effects that such reviews may otherwise entail, while reserving ex-post remedies for the most problematic cases.

Finally, a tougher system of ex-post review could be used to allow authorities to take more risks during ex-ante proceedings. Indeed, when in doubt, they could effectively experiment by allowing marginal mergers to proceed, with the understanding that bad decisions could be clawed back afterwards. In that regard, it might also be useful to set precise deadlines for such reviews and to outline the types of concerns that might prompt scrutiny or warrant divestitures.

In short, some form of ex-post review may well be desirable. It could help antitrust authorities to learn what works and subsequently to make useful changes to ex-ante merger-review systems. But this would necessitate deep reflection on the many ramifications of ex-post reassessments. Legislative reform or, at the least, publication of guidance documents by authorities, seem like essential first steps. 

Unfortunately, this is the exact opposite of what the Facebook proceedings would achieve. Plaintiffs have chosen to ignore these complex trade-offs in pursuit of a case with extremely dubious underlying merits. Success for the plaintiffs would thus prove a Pyrrhic victory, destroying far more than it would achieve.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Dirk Auer (Senior Fellow of Law & Economics, ICLE).]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).

Unsurprisingly, politicians were also quick to jump on the bandwagon. David Cicilline, the powerful chairman of the House Antitrust Subcommittee, was among those who weighed in.

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater returns and productivity from the target’s assets than the target could on its own. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project. 

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The realities of the ventilator market and their implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s value of $103 million.

Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have been made to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition value out of line with current revenues may be an indicator of the significance of a pending acquisition in cases where enforcers may not actually know the value of the target’s underlying technology: 

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Relatively low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market). 

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — knew that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If this plan were achievable, Newport stood to earn a substantial share of the profits in a multi-billion-dollar industry. 

Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.

Given the low valuation, however, as well as the fact that Newport produced other ventilators — and continues to do so to this day — there is no escaping the fact that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success. 
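
A stylized expected-value sketch helps show why a price of roughly $100 million fits the "moonshot" reading better than the "killer" one. Every figure below (market size, margin, horizon, achievable share) is a hypothetical assumption chosen only to illustrate the reasoning; none is an estimate of Newport's actual prospects.

```python
# Stylized expected-value sketch. All inputs are hypothetical assumptions
# used only to illustrate the reasoning in the text.

annual_market = 3_000_000_000   # rough order of magnitude for the global ventilator market
captured_share = 0.25           # share a successful low-cost Aura might plausibly win
operating_margin = 0.20         # assumed margin on those sales
profit_horizon_years = 10       # assumed horizon over which the gains are captured

payoff_if_aura_succeeds = annual_market * captured_share * operating_margin * profit_horizon_years

acquisition_price = 103_000_000  # what Covidien reported paying for all of Newport

# Success probability at which the price would merely break even on the Aura alone
# (Newport's existing e360/HT70 business also had value, so the Aura-specific
# probability implied by the price is lower still).
implied_probability = acquisition_price / payoff_if_aura_succeeds
print(f"implied probability of Aura success: {implied_probability:.0%}")  # ~7%
```

Under these assumptions, the price implies single-digit odds that the Aura would transform the market, which is consistent with the moonshot characterization above.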

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators;
  2. There was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated;
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014; and
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien continued to develop and sell Newport’s ventilators

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators, suitable for home use (notably the Aura, HT50 and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012 and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011).

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — did Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely, if Covidien had intended to capture the portable ventilator market by killing off its competition, it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year that it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices.

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one.

When Covidien was itself purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focusing predominantly on in-hospital, “diagnostic, surgical, and critical care” devices and Medtronic on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5 kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home care environments, but not for newborns.

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here).

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was both better able to recognize the poor prospects of Newport’s Aura project and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion — would have realized from fulfilling a substantial US government project could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965):

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger, 

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence.

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

The gist of these arguments is simple. The Amazon / Whole Foods merger would lead to the exclusion of competitors, with Amazon leveraging its swaths of data and pricing below costs. All of this raises a simple question: have these prophecies come to pass?

The problem with antitrust populism is not just that it leads to unfounded predictions regarding the negative effects of a given business practice. It also ignores the significant gains that consumers may reap from these practices. The Amazon / Whole Foods merger offers a case in point.

Continue Reading...

Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA)

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/Dupont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision has apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments like patent protection or trade secrecy constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed both at the firm level (firm-specific incentives) and at the industry level (industry-general incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes or services. This is because an invention will cannibalize the merged entity’s profits in proportions larger than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.
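Arrow’s point can be illustrated with a back-of-the-envelope calculation (a minimal sketch in Python; the profit figures are invented purely for illustration): the monopolist’s gain from innovating is only the increment over the rents it already earns, whereas a firm earning little pre-invention captures nearly the full post-invention profit.

# Back-of-the-envelope illustration of Arrow's replacement (cannibalization) effect.
# The numbers are invented for illustration only.

profit_with_invention = 150        # profit the new product earns for whoever introduces it

# Incumbent monopolist: the invention largely replaces rents it would earn anyway.
monopolist_profit_without = 100
monopolist_incentive = profit_with_invention - monopolist_profit_without   # 50

# Competitive firm or entrant: little is earned without the invention,
# so nearly the whole post-invention profit is incremental.
entrant_profit_without = 0
entrant_incentive = profit_with_invention - entrant_profit_without         # 150

print(monopolist_incentive, entrant_incentive)   # 50 150

The same arithmetic underlies the merger concern: the more of the pre-invention market the merged entity already controls, the larger the share of the new product’s returns that merely replaces its existing profits.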

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion on concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile or bypass both views of the world. Recent studies by Carl Shapiro or Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that both hit upon something fundamental when they made their seminal contributions, laying down two systems of belief about the workings of innovation-driven markets.

Now beyond the theory, the appropriability v cannibalization gravitational models provide from the outset an appealing framework for the examination of mergers in R&D driven industries in general. From an operational perspective, the antitrust agency will attempt to understand if the transaction increases appropriability – which leans in favour of clearance – or cannibalization – which leans in favour of remediation. At the same time, however, the downside of the appropriability v cannibalization framework (and of any framework more generally) may be to oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the higher the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is, however, far from obvious. Most of us know that – and our antitrust agencies’ misgivings with other sectors confirm it – IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection from competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the specific industry-strength of IP rights that goes beyond patent data and statistics thus constitutes a necessary step in merger review.

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and as a result wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D ones, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor not only any increase of the merging entity’s power over price, but also any improvement of its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), but also when the transaction combines complementary technological and marketing assets. In Dow/Dupont, no efficiency argument has apparently been made by the parties, so it is difficult to understand if and how such issues have played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, and other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory,” is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute invention. Instead, the point is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, Arrow’s cannibalization theory is relevant in industries primarily driven by a process of cumulative innovation. But it is much less helpful to understand the incentives of a monopolist in industries subject to technological change. As a result of this, the first question that should guide an antitrust agency investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technology disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wisdom and caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?

On February 13 an administrative law judge (ALJ) at the California Public Utility Commission (CPUC) issued a proposed decision regarding the Comcast/Time Warner Cable (TWC) merger. The proposed decision recommends that the CPUC approve the merger with conditions.

It’s laudable that the ALJ acknowledges at least some of the competitive merits of the proposed deal. But the set of conditions that the proposed decision would impose on the combined company in order to complete the merger represents a remarkable set of unauthorized regulations that are both inappropriate for the deal and at odds with California’s legislated approach to regulation of the Internet.

According to the proposed decision, every condition it imposes is aimed at mitigating a presumed harm arising from the merger:

The Applicants must meet the conditions adopted herein in order to provide reasonable assurance that the proposed transaction will be in the public interest in accordance with Pub. Util. Code § 854(a) and (c).… We only adopt conditions which mitigate an effect of the merger in order to satisfy the public interest requirements of § 854.

By any reasonable interpretation, this would mean that the CPUC can adopt only those conditions that address specific public interest concerns arising from the deal itself. But most of the conditions in the proposed decision fail this basic test and seem designed to address broader social policy issues that have nothing to do with the alleged competitive effects of the deal.

Instead, without undertaking an analysis of the merger’s competitive effects, the proposed decision effectively accepts that the merger serves the public interest, while also simply accepting the assertions of the merger’s opponents that it doesn’t. In the name of squaring that circle, the proposed decision seeks to permit the merger to proceed, but then seeks to force the post-merger company to conform to the merger’s critics’ rather arbitrary view of their preferred market structure for the provision of cable broadband services in California.

For something — say, a merger — to be in the public interest, it need not further every conceivable public interest goal. This is a perversion of the standard, and it turns “public interest” into an unconstrained license to impose a regulatory wish-list on particular actors, outside of the scope of usual regulatory processes.

While a few people may have no problem with the proposed decision’s expansive vision of Internet access regulation, California governor Jerry Brown and the overwhelming majority of the California state legislature cannot be counted among the supporters of this approach.

In 2012 the state legislature passed by an overwhelming margin — and Governor Brown signed — SB 1161 (codified as Section 710 of the California Public Utilities Code), which expressly prohibits the CPUC from regulating broadband:

The commission shall not exercise regulatory jurisdiction or control over Voice over Internet Protocol and Internet Protocol enabled services except as required or expressly delegated by federal law or expressly directed to do so by statute or as set forth in [certain enumerated exceptions].

The message is clear: The CPUC should not try to bypass clear state law and all institutional safeguards by misusing the merger clearance process.

While bipartisan majorities in the state house, supported by a Democratic governor, have stopped the CPUC from imposing new regulations on Internet and VoIP services through SB 1161, the proposed decision seeks to impose regulations through merger conditions that go far beyond anything permitted by this state law.

For instance, the proposed decision seeks to impose arbitrary retail price controls on broadband access:

Comcast shall offer to all customers of the merged companies, for a period of five years following the effective date of the parent company merger, the opportunity to purchase stand-alone broadband Internet service at a price not to exceed the price charged by Time Warner for providing that service to its customers, and at speeds, prices, and terms, at least comparable to that offered by Time Warner prior to the merger’s closing.

And the proposed decision seeks to mandate market structure in other insidious ways, as well, mandating specific broadband speeds, requiring a break-neck geographic expansion of Comcast’s service area, and dictating installation and service times, among other things — all without regard to the actual plausibility (or cost) of implementing such requirements.

But the problem is even more acute. Not only does the proposed decision seek to regulate Internet access issues irrelevant to the merger, it also proposes to impose conditions that would actually undermine competition.

The proposed decision would impose the following conditions on Comcast’s business VoIP and business Internet services:

Comcast shall offer Time Warner’s Business Calling Plan with Stand Alone Internet Access to interested CLECs throughout the combined service territories of the merging companies for a period of five years from the effective date of the parent company merger at existing prices, terms and conditions.

Comcast shall offer Time Warner’s Carrier Ethernet Last Mile Access product to interested CLECs throughout the combined service territories of the merging companies for a period of five years from the effective date of the parent company merger at the same prices, terms and conditions as offered by Time Warner prior to the merger.

But the proposed decision fails to recognize that Comcast is an also-ran in the business service market. Last year it served a small fraction of the business customers served by AT&T and Verizon, who have long dominated the business services market:

According to a Sept. 2011 ComScore survey, AT&T and Verizon had the largest market shares of all business services ISPs. AT&T held 20% of market share and Verizon held 12%. Comcast ranked 6th, with 5% of market share.

The proposed conditions would hamstring the upstart challenger Comcast by removing both product and pricing flexibility for five years – an eternity in rapidly evolving technology markets. That’s a sure-fire way to minimize competition, not promote it.

The proposed decision reiterates several times its concern that the combined Comcast/Time Warner Cable will serve more than 80% of California households, and “reduce[] the possibilities for content providers to reach the California broadband market.” The alleged concern is that the combined company could exercise anticompetitive market power — imposing artificially high fees for carrying content or degrading service of unaffiliated content and services.

The problem is Comcast and TWC don’t compete anywhere in California today, and they face competition from other providers everywhere they operate. As the decision matter-of-factly states:

Comcast and Time Warner do not compete with one another… [and] Comcast and Time Warner compete with other providers of Internet access services in their respective service territories.

As a result, the merger will actually have no effect on the number of competitive choices in the state; the increase in the statewide market share as a result of the deal is irrelevant. And so these purported competition concerns can’t be the basis for any conditions, let alone the sweeping ones set out in the proposed decision.

The stated concern about content providers finding it difficult to reach Californians is a red herring: the post-merger Comcast geographic footprint will be exactly the same as the combined, pre-merger Comcast/TWC/Charter footprint. Content providers will be able to access just as many Californians (and with greater speeds) as before the merger.

True, content providers that just want to reach some number of random Californians may have to reach more of them through Comcast than they would have before the merger. But what content provider just wants to reach some number of Californians in the first place? Moreover, this fundamentally misstates the way the Internet works: it is users who reach the content they prefer; not the other way around. And, once again, for literally every consumer in the state, the number of available options for doing so won’t change one iota following the merger.

Nothing shows more clearly how the proposed decision has strayed from responding to merger concerns to addressing broader social policy issues than the conditions aimed at expanding low-price broadband offerings for underserved households. Among other things, the proposed conditions dramatically increase the size and scope of Comcast’s Internet Essentials program, converting this laudable effort from a targeted program (that uses a host of tools to connect families where a child is eligible for the National School Lunch Program to the Internet) into one that must serve all low-income adults.

Putting aside the damage this would do to Internet Essentials’ core mission of connecting school-age children by diverting resources from the program’s central purpose, it is manifestly outside the scope of the CPUC’s review. Nothing in the deal affects the number of adults (or children, for that matter) in California without broadband.

It’s possible, of course, that Comcast might implement something like an expanded Internet Essentials program without any prodding; after all, companies implement (and expand) such programs all the time. But why on earth should regulators be able to define such an obligation arbitrarily, and to impose it on whatever ISP happens to be asking for a license transfer? That arbitrariness creates precisely the sort of business uncertainty that SB 1161 was meant to prevent.

The same thing applies to the proposed decision’s requirement regarding school and library broadband connectivity:

Comcast shall connect and/or upgrade Internet infrastructure for K-12 schools and public libraries in unserved and underserved areas in Comcast’s combined California service territory so that it is providing high speed Internet to at least the same proportion of K-12 schools and public libraries in such unserved and underserved areas as it provides to the households in its service territory.

No doubt improving school and library infrastructure is a noble goal — and there’s even a large federal subsidy program (E-Rate) devoted to it. But insisting that Comcast do so — and do so to an extent unsupported by the underlying federal subsidy program already connecting such institutions, and in contravention of existing provider contracts with schools — as a condition of the merger is simple extortion.

The CPUC is treating the proposed merger like a free-for-all, imposing in the name of the “public interest” a set of conditions that it would never be permitted to impose absent the gun-to-the-head of merger approval. Moreover, it seeks to remake California’s broadband access landscape in a fashion that would likely never materialize in the natural course of competition: If the merger doesn’t go through, none of the conditions in the proposed decision and alleged to be necessary to protect the public interest will exist.

Far from trying to ensure that Comcast’s merger with TWC doesn’t erode competitive forces to the detriment of the public, the proposed decision is trying to micromanage the market, simply asserting that the public interest demands imposition of its subjective and arbitrary laundry list of preferred items. This isn’t sensible regulation, it isn’t compliant with state law, and it doesn’t serve the people of California.

James Cooper is Director, Research and Policy at the Law & Economics Center at George Mason University School of Law

The FTC has long been on a quest to find the elusive species of conduct that Section 5 alone can tackle.  A series of early Supreme Court cases interpreting the FTC Act – the most recent and widely cited of which is more than forty years old (FTC v. Sperry & Hutchinson Co., 405 U.S. 233 (1972)) – appeared to grant the FTC wide-ranging powers to condemn methods of competition as “unfair.”[1]  A series of judicial setbacks in the 1980s and early 1990s, however, scaled back Section 5’s domain.[2]

Since 1992, the FTC has continued to define Section 5’s reach internally – through settlements primarily involving two classes of conduct: so-called “invitations to collude” (ITC);[3] and breaches of agreements to disclose or to license standard-essential patents (SEPs).[4] Similar in spirit to ITCs, the Commission has also alleged pure Section 5 violations in cases involving sharing of competitively sensitive information.[5]

In addition to these lines of cases, the FTC has used Section 5 in two additional matters: the “CD MAP” cases, involving the parallel adoption by major record companies of “minimum advertised price” restrictions; and the suit against Intel for engaging in exclusionary conduct, including deception and certain pricing practices.

Absent external appellate review, however, it remains unclear whether Congress intended for these classes of conduct to be illegal as “unfair methods of competition.”  Because settlement with the FTC will be preferable to litigation in a wide array of circumstances, what is considered illegal under Section 5 largely has become whatever at least three Commissioners can agree on.  Accordingly, there is still a relatively large zone in which the FTC can develop this quasi Section 5 common law with little fear of triggering litigation, and the concomitant specter of judicial scrutiny.

The recent Google investigation provides some evidence as to just how large this zone of discretion may be.  Although the Commission eventually decided to close its investigation into Google’s search practices – and was able to extract informal concessions from Google related to “scraping” and failures to facilitate “multihoming” – that the Commission would entertain a case premised on such conduct hints at a willingness to make arguments that clear Sherman Act precedent involving duties to aid rivals does not apply to the Section 5 actions, or that misappropriation can serve as the basis for a Section 5 theory.  The Commission’s settlement with Google concerning breaches of commitments to license SEPs on FRAND terms, moreover, continued its application of antitrust and consumer protection law to contractual disputes between sophisticated businesses.

Parsing the statements in Google suggests at least four directions in which at least one commissioner was willing to expand Section 5 beyond the Sherman Act:  duties to aid rivals, misappropriation, failure to disclose the relationship between data collection and market power, and breach of an agreement to license SEPs on FRAND terms.  Further, in two instances, at least one commissioner additionally was willing to declare the same conduct an unfair act or practice.  This is far from a coherent framework for Section 5.

The FTC’s discretion under Section 5 potentially comes at a steep price.  First, it creates uncertainty.  If businesses are unsure about where the line between legal and illegal behavior is drawn, they rationally will take too much care to avoid violating the law, which in antitrust can mean competing less aggressively.  Second, the more discretion the FTC enjoys to condemn a practice as an unfair method of competition, the more competition will be channeled from the marketplace to 600 Pennsylvania Avenue.  Although this may be a good development for economists and attorneys, it is bad for consumers.

The FTC could go a long way toward solving this problem if it were to take a cue from the history of its consumer protection program.  The FTC’s overreach in the 1970s earned it the moniker “national nanny” and nearly shut the agency down.  As part of a program to instill public – and more importantly Congressional – trust, the FTC adopted a series of binding policy statements that made consumer harm the touchstone of its authority to challenge “unfair or deceptive acts or practices” (UDAP authority).

A similar effort at self-restraint that limits the FTC’s UMC authority could help reduce uncertainty and rent seeking.  Both Commissioners Ohlhausen and Wright should be commended on their impressive efforts to start this discussion.  In my first post, however, I’d like to discuss a more dramatic path that neither has addressed: confining Section 5 to the Sherman Act.

In many ways the search for Section 5’s domain beyond the Sherman Act is a solution in search of a problem.  There is certainly no consensus that the Sherman Act – even after some recent limitations imposed by cases like Twombly, Trinko, and Credit Suisse – is no longer fit for the task of policing anticompetitive conduct.  It may well be that the FTC is trying to sell a product that nobody needs.  Consequently, the costs of abandoning an expansive Section 5 may be small; with the exceptions of ITCs and information sharing involving small firms, the rest of the FTC’s Section 5 portfolio also can be reached under existing Sherman Act theories (albeit with more difficulty), or handled through other bodies of law or self-regulation.

For example, under the D.C. Circuit’s decision in Rambus, Section 2 is available for cases involving deception at the time of the standard adoption that materially affected the choice of standard.[6] Accordingly, a Section 2 case could be made out if the Commission could show that the defendant either concealed an SEP or if a FRAND commitment was made in bad faith and affected the choice of standard.  Even if deception cannot be shown, breaches of FRAND commitments involving SEPs that result in hold-up necessarily involve legal review; the court (or ITC) must decide whether to grant the SEP holder’s request for an injunction (or an exclusion order), and the alleged infringer has opportunities to raise a variety of contract and patent law objections.  Likewise, bundling, predatory pricing, and deception claims like those in Intel are clearly cognizable under Sherman Section 2 (which is why Intel was pled both ways).

Confining Section 5 to the Sherman Act would also have the advantage of reducing arbitrage opportunities between the FTC and the Antitrust Division.  As Commissioner Ohlhausen has noted, if the same conduct results in different legal treatment depending on which agency wins clearance – as it arguably would have in the Google investigation – these routine bureaucratic procedures could have substantial influence on ultimate liability.

Although this conduct is reachable under the Sherman Act, many of the cases would be difficult to win.  To the extent that these Sherman Act rules reasonably sort anticompetitive from procompetitive or benign conduct, however, forcing the Commission to satisfy Sherman Act standards would assure that its actions promote consumer welfare.

The only types of conduct that clearly slip out of the FTC’s reach when Section 5 is confined to the Sherman Act are ITCs and information sharing involving firms with low market shares.  The costs of letting this conduct go, however, are likely minimal.  Although most would agree that this conduct is worth stopping, the FTC has pursued fewer than ten of these cases in the past 20 years.  Even including deterrence effects, removing ITCs and information sharing cases from the FTC portfolio is unlikely to cause a great deal of consumer harm.  Most managers are probably aware that price fixing is illegal, and it is doubtful that anybody proposes a cartel or shares information without hoping that the other party will get on board.  At the same time, these Section 5 cases are obscure – lurking in a series of consent orders on the FTC’s web site.  The sophisticated antitrust bar likely is familiar with this strain of Section 5 activity, but outside of the clients counseled by top tier law firms, it is not obvious that many businesses are aware of their existence.  Without awareness, there can be no deterrence.  Further, if either of these acts leads to a conspiracy or significant market power, it will be reachable under the Sherman Act.

Finally, removing the FTC’s Section 5 authority will not diminish its role as an antitrust norm creator.  Indeed, over its nearly 100-year history, the FTC has not used Section 5 to implement any important antitrust norms.[7]  That is not to say that the FTC has lacked influence over the development of antitrust jurisprudence – to the contrary, it clearly has, but within the confines of the Sherman Act.  For example, the FTC has made major positive contributions in the fields of joint conduct,[8] state action,[9] Noerr-Pennington,[10] the treatment of professional regulation,[11] and most recently in the context of pharmaceutical reverse settlements.[12]

Of course, if Section 5 is to offer nothing beyond the Sherman Act, that raises the question of whether the FTC is needed at all. In this manner, the quest for a species of harmful conduct that is reachable only through Section 5 is an existential one.  Does it make sense to have two agencies enforcing the same law?[13]  Probably not.  The FTC’s comparative advantage over DOJ lies in its research capability, and of course its consumer protection mission.  Accordingly, stripped of a unique antitrust enforcement authority, one possible reorganization would be to house enforcement in DOJ, with the FTC providing competition and consumer protection policy R&D that would feed into case selection designed to improve these bodies of law.

However attractive it may be from a policy standpoint, jettisoning Section 5 beyond the Sherman Act is a political non-starter; Congress would never permit the FTC to abrogate its UMC power.  Indeed, recall the nasty fight that erupted when the FTC and DOJ attempted to reach a clearance agreement in 2002.  Accordingly, a more realistic path for the Commission to take would be to spell out the circumstances under which it would consider a stand-alone Section 5 case.[14]  I will turn to this in my next posting.


[1] See, e.g., FTC v. Sperry & Hutchinson Co., 405 U.S. 233 (1972); William E. Kovacic & Marc Winerman, Competition Policy and the Application of Section 5 of the Federal Trade Commission Act, 76 Antitrust L.J. 929, 930-31 (2010).

[2] FTC v. Boise Cascade, 637 F.2d 573, 581 (9th Cir. 1980); Official Airline Guides, Inc. v. FTC, 630 F.2d 920 (2d. Cir. 1980); E.I DuPont de Nemours & Co. v. FTC, 729 F.2d 128 (2d Cir. 1984).  The FTC’s last judicially decided Section 5 action was in 1992. FTC v. Abbott Labs, 853 F. Supp. 526 (D.D.C. 1992).

[3] In re U-Haul Int’l, Inc. (June 9, 2010); In re Valassis Communications, Inc. (April 19, 2006); In re Stone Container Corp. (June 3, 1998); In re Precision Moulding Co. (Sept. 3, 1996); In re YKK(USA) (July 1, 1993); In re A.E. Clevite, Inc. (June 8, 1993); In re Quality Trailer Prods. Corp. (Nov. 5, 1992).

[4] In re Dell Computer (1996); In re Negotiated Data Systems, Inc. (2008); In re Robert Bosch GmbH (2012); In re Google, Inc. (2013).

[5] In re Bosely (2013); In re Nat’l Ass’n of Music Merchants (2009).

[6] Rambus Inc. v. FTC, 522 F.3d 456 (D.C. Cir. 2008); see also Broadcom Corp. v. Qualcomm Inc., 501 F.3d 297 (3rd Cir. 2007); Microsoft, 253 F.3d 3, 76 (D.C. Cir. 2001); Conwood Co. v. U.S. Tobacco Co., 290 F.3d 768 (6th Cir. 2002).

[7] See Kovacic & Winerman, supra note__, at 941 (“The FTC’s record of appellate litigation involving applications of Section 5 that go beyond prevailing antitrust norms is uninspiring.”).

[8] See Polygram Holding, Ltd. v. FTC, 416 F.3d 29 (D.C. Cir. 2005).

[9] See FTC v. Ticor Ins. Co, 504 U.S. 621 (1992); North Carolina Board of Dental Examiners v. FTC, No. 12-1172 (4th Cir. May 31, 2013).

[10] See FTC v. Phoebe Putney Healthcare System, Inc. (Feb. 13, 2013); FTC v. Superior Court Trial Lawyers Ass’n, 493 U.S. 411 (1990).

[11] See FTC v. Indiana Federation of Dentists, 476 U.S. 447 (1986); FTC v. California Dental Association, 526 U.S. 756 (1999).

[12] FTC v. Actavis, Inc., Slip Op. No. 12-416 (June 16, 2013).

[13] See Kovacic & Winerman

[14] Commissioners Ohlhausen and Wright have recently begun this discussion.  See __.

The ridiculousness currently emanating from ICANN and the NTIA (see these excellent posts from Milton Mueller and Eli Dourado on the issue) over .AMAZON, .PATAGONIA and other “geographic”/commercial TLDs is precisely why ICANN (and, apparently, the NTIA) is a problematic entity as a regulator.

The NTIA’s response to ICANN’s Governmental Advisory Committee’s (GAC) objection to Amazon’s application for the .AMAZON TLD (along with similar applications from other businesses for other TLDs) is particularly troubling, as Mueller notes:

In other words, the US statement basically says “we think that the GAC is going to do the wrong thing; its most likely course of action has no basis in international law and is contrary to vital policy principles the US is supposed to uphold. But who cares? We are letting everyone know that we will refuse to use the main tool we have that could either stop GAC from doing the wrong thing or provide it with an incentive to moderate its stance.”

Competition/antitrust issues don’t seem to be the focus of this latest chapter in the gTLD story, but it is instructive on this score nonetheless. As Berin Szoka and I wrote in ICLE’s comment to ICANN on gTLDs:

Among the greatest threats to this new “land rush” of innovation is the idea that ICANN should become a competition regulator, deciding whether to approve a TLD application based on its own competition analysis. But ICANN is not a regulator. It is a coordinator. ICANN should exercise its coordinating function by applying the same sort of analysis that it already does in coordinating other applications for TLDs.

* * *

Moreover, the practical difficulties in enforcing different rules for generic TLDs as opposed to brand TLDs likely render any competition pre-clearance mechanism unworkable. ICANN has already determined that .brand TLDs can and should be operated as closed domains for obvious and good reasons. But differentiating between, say .amazon the brand and .amazon the generic or .delta the brand and .delta the generic will necessarily result in arbitrary decisions and costly errors.

Of most obvious salience: implicit in the GAC’s recommendation is the notion that somehow Amazon.com is sufficiently different from .AMAZON to justify denying Amazon’s ownership of the latter. But as Berin and I point out:

While closed gTLDs might seem to some to limit competition, that limitation would occur only within a particular, closed TLD. But it has every potential to be outweighed by the dramatic opening of competition among gTLDs, including, importantly, competition with .com.

In short, the markets for TLDs and domain name registrations do not present particular competitive risks, and there is no a priori reason for ICANN to intervene prospectively.

In other words, treating Amazon.com and .AMAZON as different products, in different relevant markets, is a mistake. No doubt Amazon.com would, even if .AMAZON were owned by Amazon, remain for the foreseeable future the more relevant site. If Latin American governments are concerned with cultural and national identity protection, they should (not that I’m recommending this) focus their objections on Amazon.com. But the reality is that Amazon.com doesn’t compromise cultural identity, and neither would Amazon’s ownership of .AMAZON. Rather, the wide availability of new TLDs opens up an enormous range of new competitive TLD and SLD constraints on existing, dominant .COM SLDs, any number of which could be effective in promoting and preserving cultural and national identities.

By the way – Amazonia.com, Amazonbasin.com and Amazonrainforest.com, presumably among many others, look to be unused and probably available for purchase. Perhaps opponents of Amazon’s ownership of .AMAZON should set their sights on those or other SLDs and avoid engaging in the sort of politicking that will ultimately ruin the Internet.

New York Times columnist Gretchen Morgenson is arguing for a “pre-clearance”  approach to regulating new financial products:

The Food and Drug Administration vets new drugs before they reach the market. But imagine if there were a Wall Street version of the F.D.A. — an agency that examined new financial instruments and ensured that they were safe and benefited society, not just bankers.  How different our economy might look today, given the damage done by complex instruments during the financial crisis.

The idea Morgenson is advocating was set forth by law professor Eric Posner (one of my former profs) and economist E. Glen Weyl in this paper.  According to Morgenson,

[Posner and Weyl] contend that new instruments should be approved by a “financial products agency” that would test them for social utility. Ideally, products deemed too costly to society over all — those that serve only to increase speculation, for example — would be rejected, the two professors say.

While I have not yet read the paper, I have some concerns about the proposal, at least as described by Morgenson.

First, there’s the knowledge problem.  Even if we assume that agents of a new “Financial Products Administration” (FPA) would be completely “other-regarding” (altruistic) in performing their duties, how are they to know whether a proposed financial instrument is, on balance, beneficial or detrimental to society?  Morgenson suggests that “financial instruments could be judged by whether they help people hedge risks — which is generally beneficial — or whether they simply allow gambling, which can be costly.”  But it’s certainly not the case that speculative (“gambling”) investments produce no social value.  They generate a tremendous amount of information because they reflect the expectations of hundreds, thousands, or millions of investors who are placing bets with their own money.  Even the much-maligned credit default swaps, instruments Morgenson and the paper authors suggest “have added little to society,” provide a great deal of information about the creditworthiness of insureds.  How is a regulator in the FPA to know whether the benefits a particular financial instrument creates justify its risks? 

When regulators have engaged in merits review of investment instruments — something the federal securities laws generally eschew — they’ve often screwed up.  State securities regulators in Massachusetts, for example, once banned sales of Apple’s IPO shares, claiming that the stock was priced too high.  Oops.

In addition to the knowledge problem, the proposed FPA would be subject to the same institutional maladies as its model, the FDA.  The fact is, individuals do not cease to be rational, self-interest maximizers when they step into the public arena.  Like their counterparts in the FDA, FPA officials will take into account the personal consequences of their decisions to grant or withhold approvals of new products.  They will know that if they approve a financial product that injures some investors, they’ll likely be blamed in the press, hauled before Congress, etc.  By contrast, if they withhold approval of a financial product that would be, on balance, socially beneficial, their improvident decision will attract little attention.  In short, they will share with their counterparts in the FDA a bias toward disapproval of novel products.

In highlighting these two concerns, I’m emphasizing a point I’ve made repeatedly on TOTM:  A defect in private ordering is not a sufficient condition for a regulatory fix.  One must always ask whether the proposed regulatory regime will actually leave the world a better place.  As the Austrians taught us, we can’t assume the regulators will have the information (and information-processing abilities) required to improve upon private ordering.  As Public Choice theorists taught us, we can’t assume that even perfectly informed (but still self-interested) regulators will make socially optimal decisions.  In light of Austrian and Public Choice insights, the Posner & Weyl proposal — at least as described by Morgenson — strikes me as problematic.  [An additional concern is that the proposed pre-clearance regime might just send financial activity offshore.  To their credit, the authors acknowledge and address that concern.]

First, Google had the audacity to include a map in search queries suggesting a user wanted a map.  Consumers liked it.  Then came video.  Then, they came for the beer:

Google’s first attempt at brewing has resulted in a beer that taps ingredients from all across the globe. They teamed up with Delaware craft brewery Dogfish Head to make “URKontinent,” a Belgian Dubbel style beer with flavors from five different continents.

No word yet from Google’s antitrust-wielding critics on whether integration into beer will exclude rival vertical search engines who, without access to the beer, have no chance to compete.  Yes, there are specialized beer search sites if you must know (or local beer search).  Or small breweries who, because of Google’s market share in search, cannot compete against Dogfish Head’s newest product.  But before we start the new antitrust investigation, Google has offered some new facts to clarify matters:

Similarly, the project with Dogfish Head brewery was a Googler-driven project organized by a group of craftbrewery aficionados across the company. While our Googlers had fun advising on the creation of a beer recipe, we aren’t receiving any proceeds from the sale of the beer and we have no plans to enter the beer business.

Whew.  What a relief.  But I’m sure the critics will be watching, just in case, to see whether Dogfish Head jumps in the search rankings.  Donating time and energy to the creation of beer is really just a gateway to more serious exclusionary conduct, right?  And Section 5 of the FTC Act applies to incipient conduct in the beer market, clearly.  Or did the DOJ get beer-related Google activities in the clearance arrangement between the agencies?

One additional observation on the WSJ story Paul mentioned.  Much has been written about the strained relationship between the FTC and DOJ in antitrust matters.  There has, of course, never been a more vivid and entertaining description of these tensions than the one offered by former Chairman and now Commissioner Kovacic, who observed that the so-called sister agencies amounted to “an archipelago of policy makers with very inadequate ferry service between the islands,” and that in “too many instances when you go to visit those islands the inhabitants come out with sticks and torches and try to chase you away.”

It’s been a while since that particular description, but the quotes in the Journal article suggest that a remarkable level of tension remains between the agencies.  Consider:

  • Commissioner Rosch describing the DOJ as “an arm of the administration” that “can and will enforce the antitrust laws only insofar as that is consistent with administration policy”; or
  • Rosch even going so far as to question whether AAG Varney could take an unbiased view of health-care-related matters because of her past representation of the American Hospital Association.

Fairly serious stuff.  Perhaps the Commissioner and AAG just need some topic upon which they can both agree?

The inter-agency clearance fights, and especially the high-profile ones that bring out the sticks and torches, significantly undermine the mission of antitrust enforcement institutions.  Commissioner Kovacic closes the article with the basic but critical point:  “The fact and appearance of a contest are bad for the coherence,” says Mr. Kovacic. “If you develop a perception that you’re going to get different outcomes depending on where [a deal] goes, your system suffers.”


Yesterday the final Horizontal Merger Guidelines Review workshop was held and, among other antitrust luminaries, our own Josh Wright participated.  We look forward to a report from the front lines.

Meanwhile, Assistant Attorney General Varney’s comments are available on the interwebs.  Overall, her remarks seem uncontroversial, especially following on the heels of the agency’s (surprising?) clearance of the Live Nation/Ticketmaster merger with conditions (but see the agency’s challenge of the consummated Dean Foods/Foremost Farms merger, about which I will have more to say in a subsequent post).  But I did find one section quite troubling.  Acknowledging that agency practice did not hew slavishly to the Guidelines’ “five-step analytical process” for assessing markets and market shares, Varney noted:

Implicit in deemphasizing the sequential nature of the Guidelines inquiry is a recognition that defining markets and measuring market shares may not always be the most effective starting point for many types of merger reviews. Remember, the purpose of defining a market and assessing shares is to assess potential harm. When it is clear, for instance, that either certain vulnerable customers are likely to be harmed by a merger, or that certain customers have in fact been harmed by a consummated merger, the need to define a market to assess likely competitive effects is diminished. For instance, the consumer harm that followed from the consummated Evanston hospital transaction lessened the importance of the Commission’s market definition and market share analyses in that matter. Our panelists have largely confirmed the view that market definition should not be an end-all exercise. Rather, it is something to be incorporated in a more integrated, fact-driven analysis directed at competitive effects.

I am among the many commenters who have criticized the Guidelines’ approach to market definition and market share–my submission to the workshops is here.  There has also been a strong movement recently to do away with market definition in some unilateral-effects analyses and to replace it with the UPP analysis promoted most recently in the Farrell & Shapiro article (pdf).  Interestingly, while Varney has previously gone on record opposing this movement, elsewhere in this speech she seems to endorse it:

There is a growing body of evidence that measures of upward pricing pressure, which focus on diversion ratios, and price-cost margins, can be highly informative in assessing the likelihood of unilateral pricing effects.

But that passage appears in a different section of the speech; UPP remains an analytical approach (as opposed to the class of cases Varney is concerned with here, where harm to certain consumers is simply “clear”); and it does not seem to be what she’s talking about in the quote above.  Here she seems to mean something else–and I fear it is something troubling.
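
For readers who haven’t followed this literature, the core of the Farrell & Shapiro screen can be sketched roughly as follows (my own shorthand rendering, not the article’s exact notation):

$$UPP_1 \;=\; D_{12}\,(P_2 - C_2) \;-\; E_1\,C_1$$

Here $D_{12}$ is the diversion ratio from product 1 to product 2 (the share of sales lost by product 1 after a price increase that is recaptured by merging-partner product 2), $P_2 - C_2$ is the dollar margin on product 2, and $E_1$ is a presumed efficiency credit expressed as a proportional reduction in product 1’s marginal cost.  A positive value signals upward pressure on product 1’s price.  Note that even this shortcut requires estimates of diversion and margins, that is, some assessment of substitution, a point that bears on the problem discussed next.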

Taken literally, what Varney is saying is that an ad hoc (ok, fine–an “integrated, fact-driven”) determination that some customers (“vulnerable” ones, whatever that means) may be made worse off by a merger lessens the need for a more comprehensive assessment of overall competitive dynamics within a relevant market.  But I don’t know what this means, frankly.  In the first place, how is the agency supposed to know that some customers are likely to be harmed if it hasn’t assessed the availability of substitutes and the extent of diversion?  One can certainly criticize the method by which this assessment is made, but a conclusion of harm absent this assessment seems absurd.  Moreover, if Varney really means that all that is required to condemn a merger is that any customers may be harmed, no matter how many are also benefited, at a minimum it sounds like she’s writing the efficiencies defense out of the Guidelines, but she may even be justifying condemnation of any and all mergers–after all, how many actions in the marketplace impose a cost on literally no one?  If, as seems likely, it is inframarginal consumers who are “likely” “vulnerable” to price increases (where “vulnerable” may be a synonym for “having inelastic demand”), then this test is a repudiation of the entire economic edifice of modern merger analysis (parallel to my discussion of the DC Circuit’s Whole Foods decision here).

And Varney’s reference to the FTC’s Evanston Northwestern case is a bit of a sleight of hand.  That was indeed a consummated transaction, where the requisite harm was shown by direct pricing evidence following the merger.  That’s quite different from tossing out the Merger Guidelines in a non-consummated merger case because it is “clear” that “vulnerable” consumers are “likely” to be harmed.  And even the Evanston Northwestern case is not without controversy, precisely because forsaking the Guidelines’ analytical framework also forsook clarity in the analysis (see, for example, the strong criticism of the case here).

According to the Guidelines themselves,

The unifying theme of the Guidelines is that mergers should not be permitted to create or enhance market power or to facilitate its exercise. Market power to a seller is the ability profitably to maintain prices above competitive levels for a significant period of time.

The presence of some harm (how much, by the way?) to some consumers does not necessarily equate to market power, unless the definition is simply tautological.  Under the Guidelines approach, this would require a market definition so narrow (defined to include only the harmed customers) that it would be economically meaningless (the classic “red-haired, bearded, one-eyed, man-with-a-limp classification” condemned by Justice Fortas in his Grinnell dissent).  Sidestepping the Guidelines’ analytical framework by equating the exercise of market power with a theoretical price increase that wouldn’t be cognizable under the Guidelines (and wouldn’t exist in the real world) is not merely an analytical shortcut; it is a subversion of the whole analysis.  Again, see the fundamental errors of the Whole Foods case.
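
To see why, recall (in rough terms, this is the standard critical-loss arithmetic rather than language from the Guidelines themselves) that the hypothetical-monopolist exercise asks whether a small but significant and non-transitory increase in price (SSNIP) of size $s$, imposed across the entire candidate market, would be profitable.  It is profitable only if the share of sales actually lost stays below the critical level:

$$\text{Actual Loss} \;<\; \frac{s}{s + m}$$

where $m$ is the percentage price-cost margin.  The test thus turns on aggregate substitution away from the whole candidate market, not on whether some subset of inframarginal customers would end up paying more, which is why a “market” gerrymandered to include only the harmed customers proves nothing about market power.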

Now, in the end, all she may be saying is that sometimes there is direct evidence of harm, properly (statistically) attributable to the merger, many years after a transaction has been consummated.  Or that the risk of harm is so self-evident that a formal analysis isn’t required–say, when there are simply no other competitors in a relevant geographic area, no timely entry is possible (for some reason . . . ), and a significant number of customers is affected.  I suppose this could happen.  I would expect in such circumstances that the parties wouldn’t even bother attempting the merger, but maybe once in a while the situation could arise.  But I just can’t fathom that it would arise often enough to warrant an important policy speech on the Merger Guidelines by the AAG.

So what is Varney saying?  Anyone?