
[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) makes them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work on externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to show clearly that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. That does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to strike an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a period of rapidly rising venture-capital investment, both in the number of deals and in dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct anticompetitive effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest on restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must also be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether a given legal test is desirable is not simply a function of its accuracy, the cost of administering it, and the respective costs of false positives and false negatives. It also depends critically on how prevalent the conduct that adjudicators would seek to foreclose actually is.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
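The arithmetic of this hypothetical is easy to check with a short script. All figures are the illustrative assumptions above (10,000 mergers, a 75%-accurate test), not empirical data:

```python
# Hypothetical error-cost comparison: apply a 75%-accurate test vs. clear all mergers.
# All numbers are the article's illustrative assumptions, not empirical estimates.

def apply_test(total, bad, accuracy):
    """Return (correct, false_positives, false_negatives) when a test that is
    right `accuracy` of the time is applied to every merger."""
    good = total - bad
    false_positives = round((1 - accuracy) * good)   # benign deals wrongly blocked
    false_negatives = round((1 - accuracy) * bad)    # harmful deals wrongly cleared
    correct = total - false_positives - false_negatives
    return correct, false_positives, false_negatives

def no_test(total, bad):
    """Clearing everything: every benign deal is decided correctly,
    every anticompetitive one becomes a false negative."""
    return total - bad, 0, bad

# Scenario 1: 1,000 of 10,000 mergers are anticompetitive.
print(apply_test(10_000, 1_000, 0.75))  # (7500, 2250, 250)
print(no_test(10_000, 1_000))           # (9000, 0, 1000)

# Scenario 2: 2,500 anticompetitive -- the test and doing nothing
# now produce the same total number of incorrect decisions (2,500).
print(apply_test(10_000, 2_500, 0.75))  # (7500, 1875, 625)
print(no_test(10_000, 2_500))           # (7500, 0, 2500)
```

Note that in the first scenario the blanket-clearance policy makes 1,500 fewer mistakes than the test, even though it catches no anticompetitive deals at all; only as the prevalence of bad mergers rises does the test begin to pay for itself.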

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means it may not be a useful guide for understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while the figure is 13.4% when pipelines overlap. The authors argue that this gap is evidence of killer acquisitions. But this argument misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have significant social costs.
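The 23.4% figure quoted earlier is simply the relative gap between these two continuation rates, as a quick calculation confirms (the 17.5% and 13.4% rates are taken from the paper as reported above):

```python
# Relative drop in continued development for overlapping vs. non-overlapping
# acquisitions, using the continuation rates reported in the paper.
non_overlapping = 0.175  # projects continued after non-overlapping acquisitions
overlapping = 0.134      # projects continued after overlapping acquisitions

relative_drop = 1 - overlapping / non_overlapping
print(f"{relative_drop:.1%}")  # 23.4%
```

Framed the other way around, even among overlapping acquisitions the overwhelming share of projects are treated no differently than the paper’s own benchmark predicts for benign deals.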

Third, there are several problems with describing this behavior as harmful. Indeed, Cunningham, et al., themselves acknowledge that the prospect of such acquisitions could increase innovation overall by boosting entrepreneurs’ expected returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Consider the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful efforts to market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. It also avoids the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits of catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high. 

As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product: 

If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]

Salop’s point relates to the supply-side substitutability of Scotch whisky. That is, to borrow the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and its value usually increases with age.

Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; they cannot produce the relevant product in the short term, no matter how high the profits collected by a monopolist are, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers. 

But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, both on the demand and supply side, that pressure could be brought to bear on a monopolist in the Scotch market.

One way could be to bring whiskies that are being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year maturity point, shifting this supply to a market in which profits are now relatively higher. 

Alternatively, distilleries may try to use younger batches to produce whiskies that resemble 12-year-old whiskies in flavor. A 2013 article from The Scotsman discusses this possibility in relation to the decision by Macallan, a major Scottish whisky brand, to switch to selling exclusively No-Age-Statement (NAS) whiskies—i.e., bottles that do not state the whisky’s age:

Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.

An aged Scotch cannot contain any whisky younger than the age stated on the bottle, but an NAS alternative can contain anything more than three years old (though older whiskies are often used to capture a flavor more akin to a 12-year dram). For many drinkers, NAS whiskies are a close substitute for 12-year-old whiskies. They often compete with aged equivalents on quality and flavor, and can command similar prices to aged bottles in the 12-year category. More than 80% of bottles sold bear no age statement. While this figure includes non-premium bottles, the share of NAS whiskies traded at auction on the secondary market—presumably more likely to be premium—increased from 20% to 30% between 2013 and 2018.

There are also whiskies matured outside Scotland, in regions such as Taiwan and India, that can achieve flavor profiles akin to older whiskies more quickly, thanks to warmer climates and the faster chemical reactions they cause inside barrels. Maturation can be accelerated further by using smaller barrels with a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than NAS Scotch matured in the cooler Scottish climate, and may well replicate an older barrel more authentically.

“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology that would allow them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40—a bit less than many 12-year-old Scotches. Although attempts to shortcut the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.

None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.

The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses and is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since the bills would grant broad, discretionary powers to antitrust authorities controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features—like integrating Maps into relevant Google Search results—become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s ‘Third Way’: A Different Road to the Same Destination,” argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measure would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
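The definition above is, in effect, a three-part conjunctive test. As a rough sketch, it could be expressed as the following predicate; the thresholds are those stated in this post's summary of the bill, while the field and function names are purely illustrative:

```python
# Illustrative sketch of the "covered platform" test as summarized above.
# Thresholds come from this post's description of the bill; names are invented.
from dataclasses import dataclass

@dataclass
class Platform:
    us_users: int                    # U.S.-based users
    market_cap_usd: float            # market capitalization in USD
    critical_trading_partner: bool   # can restrict dependent businesses' access

def is_covered_platform(p: Platform) -> bool:
    """All three conditions must hold for the bills' obligations to attach."""
    return (p.us_users >= 500_000
            and p.market_cap_usd > 600_000_000_000
            and p.critical_trading_partner)

print(is_covered_platform(Platform(2_000_000, 1e12, True)))   # True
print(is_covered_platform(Platform(2_000_000, 1e11, True)))   # False: cap below $600B
```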

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s piece “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. start-up exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that a user considers to be “their” private data, but that “belongs” to another user under the terms of use, can be exported and publicized as such. 

It can also make digital services more buggy and unreliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs. iOS: Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

Sponsored by Rep. Joe Neguse (D-Colo.), this bill mirrors language in the Endless Frontier Act recently passed by the U.S. Senate and would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the new schedule would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
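Taken together, the two paragraphs above describe a complete tiered fee schedule. As a minimal sketch, the proposed schedule can be written as a simple lookup; the tier boundaries and fee amounts are those stated in this post, while the function name and structure are illustrative only:

```python
# Sketch of the proposed merger filing-fee schedule described above.
# Tier boundaries and fees are from this post's summary of the bill.

def proposed_filing_fee(deal_value: float) -> int:
    """Return the proposed filing fee (USD) for a deal of `deal_value` USD."""
    tiers = [
        (5_000_000_000, 2_250_000),  # more than $5B
        (2_000_000_000, 800_000),    # $2B to $5B
        (1_000_000_000, 400_000),    # $1B to $2B
        (500_000_000, 250_000),      # $500M to $1B
        (161_500_000, 100_000),      # $161.5M to $500M
    ]
    for floor, fee in tiers:
        if deal_value > floor:
            return fee
    return 30_000                    # below $161.5M

print(proposed_filing_fee(6e9))    # 2250000
print(proposed_filing_fee(300e6))  # 100000
print(proposed_filing_fee(100e6))  # 30000
```

Note that the largest deals would pay roughly eight times the current $280,000 cap, while the smallest would pay a third less than today.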

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether the extra money does good or harm depends on how the agencies spend it.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and funding greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to support whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.

The rule that data used to train, validate, or test a high-risk AI has to be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free data sets exist, or can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of research in artificial intelligence, where sometimes only small and lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.

High-risk AIs also must submit technical and user documentation that detail voluminous information about the AI system, including descriptions of the AI’s elements, its development, monitoring, functioning, and control. These must demonstrate the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already consider liability for potential damages and regulatory obstacles as impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that the AI can never override a human user. While it may be important that an AI used in, say, the criminal justice system must be understood by humans, this requirement could inhibit sophisticated uses beyond the reasoning of a human brain, such as how to safely operate a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost for the use of high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. AIs must also be labeled to ensure humans are aware when they are speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure high-risk systems conform with rules that require third party-assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
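Each of these penalty caps follows the same "whichever is greater" formula: the larger of a fixed euro amount and a percentage of worldwide turnover. A minimal sketch of that arithmetic, using the tiers described above (function and parameter names are mine, not the regulation's):

```python
# Sketch of the regulation's penalty caps as summarized above: the maximum
# fine is the greater of a fixed euro amount and a share of worldwide turnover.

def max_penalty(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Greater of the fixed cap and pct * worldwide turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Prohibited AIs / low-quality datasets: EUR 30M or 6% of turnover.
print(max_penalty(1_000_000_000, 30_000_000, 0.06))  # 60000000.0
# Other high-risk requirements: EUR 20M or 4% of turnover.
print(max_penalty(100_000_000, 20_000_000, 0.04))    # 20000000
```

The upshot is that the fixed amounts bind only for smaller firms; for any company with turnover above 500 million euros, the 6% prong exceeds the 30-million-euro floor.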

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.

Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market’s attribution of dramatically high valuations to dramatically unprofitable businesses and regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.

This logic is usually wrong. 

The Overlooked Basics of Platform Economics

To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.

A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side of the platform. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will always converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.
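The positive feedback at work here can be made concrete with a toy model (my own illustration, not drawn from the economics literature cited in this post): suppose two platforms split a market and each user's value from a platform is proportional to that platform's share, so users migrate toward the larger one. A small initial lead then compounds toward winner-take-most:

```python
# Toy tipping model: a replicator-style dynamic in which users migrate
# toward the platform offering higher value, where value ~ platform share.
# Purely illustrative; parameters and names are invented.

def tip(share: float, steps: int = 30) -> float:
    """Evolve platform A's share when value is proportional to own share."""
    s = share
    for _ in range(steps):
        v_a, v_b = s, 1 - s  # each platform's value ~ its current share
        s = s * v_a / (s * v_a + (1 - s) * v_b)
    return s

print(round(tip(0.55), 3))  # 1.0 -- a 55% lead tips the whole market
print(tip(0.50))            # 0.5 -- an exactly even split never tips
```

The even split is an unstable equilibrium: any perturbation sends the market toward concentration, which is the convergence-toward-a-handful-of-winners dynamic the paragraph describes.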

There is a critical point, however, that often seems to be overlooked.

Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet-food delivery market but entry costs approach zero, then its market power is negligible. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs to enter the market, but also on users’ costs of switching from one platform to another or of alternating between multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.

This is why, contrary to natural intuitions, a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0) even when competitors are few. It is often asserted, however, that users are typically locked into the dominant platform and therefore face high switching costs, which implicitly satisfies the moat requirement. If that were true, the “high churn” scenario would be a theoretical curiosity and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case.

AWS and the Cloud Data-Storage Market

This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations. 

While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other data-storage or other cloud-related services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological changes, enabling entry at formerly bundled portions of the cloud data-services package. While such diversification is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation parameters.

The Surprising Instability of Platform Dominance

The instability of leadership positions in the cloud storage market is not exceptional. 

Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and AltaVista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.

Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm. 

DoorDash and the Food-Delivery Services Market

Consider the DoorDash IPO that launched in early December 2020. The market’s current valuation of approximately $50 billion for a business that has been almost consistently unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will then yield power over price and exceptional returns for investors.

There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on the likelihood that, in this context, high market share will translate into market power.

Deliveroo and the Costs of Regulatory Autopilot

Just as the stock market can suffer from delusions of platform grandeur, so too some competition regulators appear to have fallen prey to the same malady. 

A vivid illustration is provided by the 2019 decision by the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.

Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency. This is the equivalent of a competition regulator driving in reverse.

Concluding Thoughts

There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, this is a reason to intervene. The assumption is sometimes borne out, and in those cases antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” However, several conditions must be met to support the market-power assumption; absent those conditions, any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.

Without closer scrutiny, reflexively equating market share with market power will lead both investors and regulators astray.

The Federal Trade Commission and 46 state attorneys general (along with the District of Columbia and the Territory of Guam) filed their long-awaited complaints against Facebook Dec. 9. The crux of the arguments in both lawsuits is that Facebook pursued a series of acquisitions over the past decade that aimed to cement its prominent position in the “personal social media networking” market. 

Make no mistake, if successfully prosecuted, these cases would represent one of the most fundamental shifts in antitrust law since passage of the Hart-Scott-Rodino Act in 1976. That law required antitrust authorities to be notified of proposed mergers and acquisitions that exceed certain value thresholds, essentially shifting the paradigm for merger enforcement from ex-post to ex-ante review.

While the prevailing paradigm does not explicitly preclude antitrust enforcers from taking a second bite of the apple via ex-post enforcement, it has created an assumption that regulatory clearance of a merger makes subsequent antitrust proceedings extremely unlikely.

Indeed, the very point of ex-ante merger regulations is that ex-post enforcement, notably in the form of breakups, has tremendous social costs. It can scupper economies of scale and network effects on which both consumers and firms have come to rely. Moreover, the threat of costly subsequent legal proceedings will hang over firms’ pre- and post-merger investment decisions, and may thus reduce incentives to invest.

With their complaints, the FTC and state AGs threaten to undo this status quo. Even if current antitrust law allows it, pursuing this course of action threatens to quash the implicit assumption that regulatory clearance generally shields a merger from future antitrust scrutiny. Ex-post review of mergers and acquisitions does have some positive features, but the Facebook complaints fail to consider these complicated trade-offs. This oversight could hamper tech and other U.S. industries.

Mergers and uncertainty

Merger decisions are probabilistic. Of the thousands of corporate acquisitions each year, only a handful end up deemed “successful.” These relatively few success stories have to pay for the duds in order to preserve the incentive to invest.

Switching from ex-ante to ex-post review enables authorities to focus their attention on the most lucrative deals. It stands to reason that they will not want to launch ex-post antitrust proceedings against bankrupt firms whose assets have already been stripped. Instead, as with the Facebook complaint, authorities are far more likely to pursue high-profile cases that boost their political capital.

This would be unproblematic if:

  1. Authorities would commit to ex-post prosecution only of anticompetitive mergers; and
  2. Parties could reasonably anticipate whether their deals would be deemed anticompetitive in the future. 

If those conditions held, ex-post enforcement would merely reduce the incentive to partake in problematic mergers, leaving welfare-enhancing deals unscathed. But where firms could not have known ex ante that a given deal would be deemed anticompetitive, the associated error costs should weigh against prosecuting such mergers ex post, even if such enforcement might appear desirable. The deterrent effect arising from such prosecutions would be applied by the market to all mergers, including efficient ones. Put differently, authorities might get the ex-post assessment right in one case, such as the Facebook proceedings, but the bigger picture remains that they could be wrong in many other cases. Firms will perceive this threat, and it may hinder their investments.

There is also reason to doubt that either of the ideal conditions for ex-post enforcement could realistically be met in practice. Ex-ante merger proceedings involve significant uncertainty for the merging parties. Indeed, antitrust-merger clearance decisions routinely move the merging parties’ stock prices. If management and investors knew whether their transactions would be cleared, those effects would be priced in when a deal is announced, not when it is cleared or blocked. And if firms knew a given merger would be blocked, they would not waste their resources pursuing it.

Unless the answer is markedly different for ex-post merger reviews, authorities should proceed with caution. If parties cannot properly self-assess their deals, the threat of ex-post proceedings will weigh on pre- and post-merger investments (a breakup effectively amounts to expropriating investments that are dependent upon the divested assets). 

Furthermore, because authorities will likely focus ex-post reviews on the most lucrative deals, their incentive effects can be particularly pronounced. Parties may fear that the most successful mergers will be broken up. This could have wide-reaching effects for all merging firms that do not know whether they might become “the next Facebook.” 

Accordingly, for ex-post merger reviews to be justified, it is essential that:

  1. Their outcomes be predictable for the parties; and that 
  2. Analyzing the deals after the fact leads to better decision-making (fewer false acquittals and convictions) than ex-ante reviews would yield.

If these conditions are not in place, ex-post assessments will needlessly weigh down innovation, investment and procompetitive merger activity in the economy.

Hindsight does not disentangle efficiency from market power

So, could ex-post merger reviews be so predictable and effective as to alleviate the uncertainties described above, along with the costs they entail? 

Based on the recently filed Facebook complaints, the answer appears to be no. We simply do not know what the counterfactual to Facebook’s acquisitions of Instagram and WhatsApp would look like. Hindsight does not tell us whether Facebook’s acquisitions led to efficiencies that allowed it to thrive (a pro-competitive scenario), or whether Facebook merely used these deals to kill off competitors and maintain its monopoly (an anticompetitive scenario).

As Sam Bowman and I have argued elsewhere, when discussing the leaked emails that spurred the current proceedings and on which the complaints rely heavily:

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that at the time they may have caused antitrust agencies to scrutinise the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, or about the counterfactual of the merger being blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today. 

Moreover, it fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially grow to such scale organically.

In fact, contrary to what some have argued, hindsight might even complicate matters (again from Sam and me):

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

In other words, ex-post reviews will, by definition, focus on mergers where today’s outcomes seem preordained — when, in fact, they were probabilistic. This will skew decisions toward finding anticompetitive conduct. If authorities think that Instagram was destined to become great, they are more likely to find that Facebook’s acquisition was anticompetitive because they implicitly dismiss the idea that it was the merger itself that made Instagram great.

Authorities might also confuse correlation for causality. For instance, the state AGs’ complaint ties Facebook’s acquisitions of Instagram and WhatsApp to the degradation of these services, notably in terms of privacy and advertising loads. As the complaint lays out:

127. Following the acquisition, Facebook also degraded Instagram users’ privacy by matching Instagram and Facebook Blue accounts so that Facebook could use information that users had shared with Facebook Blue to serve ads to those users on Instagram. 

180. Facebook’s acquisition of WhatsApp thus substantially lessened competition […]. Moreover, Facebook’s subsequent degradation of the acquired firm’s privacy features reduced consumer choice by eliminating a viable, competitive, privacy-focused option

But these changes may have nothing to do with Facebook’s acquisition of these services. At the time, nearly all tech startups focused on growth over profits in their formative years. It should be no surprise that the platforms imposed higher “prices” on users after their acquisition by Facebook; they were maturing. Further monetizing their platforms would have been the logical next step, even absent the mergers.

It is just as hard to determine whether post-merger developments actually harmed consumers. For example, the FTC complaint argues that Facebook stopped developing its own photo-sharing capabilities after the Instagram acquisition, which the commission cites as evidence that the deal neutralized a competitor:

98. Less than two weeks after the acquisition was announced, Mr. Zuckerberg suggested canceling or scaling back investment in Facebook’s own mobile photo app as a direct result of the Instagram deal.

But it is not obvious that Facebook or consumers would have gained anything from the duplication of R&D efforts had Facebook continued to develop its own photo-sharing app. More importantly, this discontinuation is not evidence that Instagram could have overthrown Facebook. In other words, the fact that Instagram provided better photo-sharing capabilities does not necessarily imply that it could also provide a versatile platform that posed a threat to Facebook.

Finally, if Instagram’s stellar growth and photo-sharing capabilities were certain to overthrow Facebook’s monopoly, why do the plaintiffs ignore the competitive threat posed by the likes of TikTok today? Neither of the complaints makes any mention of TikTok, even though it currently has well over 1 billion monthly active users. The FTC and state AGs would have us believe that Instagram posed an existential threat to Facebook in 2012 but that Facebook faces no such threat from TikTok today. It is exceedingly unlikely that both these statements could be true, yet both are essential to the plaintiffs’ case.

Some appropriate responses

None of this is to say that ex-post review of mergers and acquisitions should be categorically out of the question. Rather, such proceedings should be initiated only with appropriate caution and consideration for their broader consequences.

When undertaking reviews of past mergers, authorities do not necessarily need to impose remedies every time they find a merger was wrongly cleared. The findings of these ex-post reviews could simply be used to adjust existing merger thresholds and presumptions. This would effectively create a feedback loop where false acquittals lead to meaningful policy reforms in the future. 

At the very least, it may be appropriate for policymakers to set a higher bar for findings of anticompetitive harm and imposition of remedies in such cases. This would reduce the undesirable deterrent effects that such reviews may otherwise entail, while reserving ex-post remedies for the most problematic cases.

Finally, a tougher system of ex-post review could be used to allow authorities to take more risks during ex-ante proceedings. Indeed, when in doubt, they could effectively experiment by allowing marginal mergers to proceed, with the understanding that bad decisions could be clawed back afterwards. In that regard, it might also be useful to set precise deadlines for such reviews and to outline the types of concerns that might prompt scrutiny or warrant divestitures.

In short, some form of ex-post review may well be desirable. It could help antitrust authorities to learn what works and subsequently to make useful changes to ex-ante merger-review systems. But this would necessitate deep reflection on the many ramifications of ex-post reassessments. Legislative reform or, at the least, publication of guidance documents by authorities, seem like essential first steps. 

Unfortunately, this is the exact opposite of what the Facebook proceedings would achieve. Plaintiffs have chosen to ignore these complex trade-offs in pursuit of a case with extremely dubious underlying merits. Success for the plaintiffs would thus prove a Pyrrhic victory, destroying far more than it achieves.

The writing is on the wall for Big Tech: regulation is coming. At least, that is what the House Judiciary Committee’s report into competition in digital markets would like us to believe. 

The Subcommittee’s Majority members, led by Rhode Island’s Rep. David Cicilline, are calling for a complete overhaul of America’s antitrust and regulatory apparatus. This would notably entail a breakup of America’s largest tech firms, by prohibiting them from operating digital platforms and competing on them at the same time. Unfortunately, the report ignores the tremendous costs that such proposals would impose upon consumers and companies alike. 

For several years now, there has been growing pushback against the perceived “unfairness” of America’s tech industry: of large tech platforms favoring their own products at the expense of entrepreneurs who use their platforms; of incumbents acquiring startups to quash competition; of platforms overcharging companies like Epic Games, Spotify, and the media, just because they can; and of tech companies that spy on their users and use that data to sell them things they don’t need. 

But this portrayal of America’s tech industry obscures an inconvenient possibility: supposing that these perceived ills even occur, there is every chance that the House’s reforms would merely exacerbate the status quo. The House report gives short shrift to this eventuality, but it should not.

Over the last decade, the tech sector has been the crown jewel of America’s economy. And while firms like Amazon, Google, Facebook, and Apple, may have grown at a blistering pace, countless others have flourished in their wake.

Google and Apple’s app stores have given rise to a booming mobile software industry. Platforms like YouTube and Instagram have created new venues for advertisers and ushered in a new generation of entrepreneurs including influencers, podcasters, and marketing experts. Social media platforms like Facebook and Twitter have disintermediated the production of news media, allowing ever more people to share their ideas with the rest of the world (mostly for better, and sometimes for worse). Amazon has opened up new markets for thousands of retailers, some of which are now going public. The recent $3.4 billion Snowflake IPO may have been the biggest public offering of a tech firm no one has heard of.

The trillion-dollar question is whether it is possible to regulate this thriving industry without stifling its unparalleled dynamism. If Rep. Cicilline’s House report is anything to go by, the answer is a resounding no.

Acquisition by a Big Tech firm is one way for startups to rapidly scale and reach a wider audience, while allowing early investors to make a quick exit. Self-preferencing can enable platforms to tailor their services to the needs and desires of users (Apple and Google’s pre-installed app suites are arguably what drive users to opt for their devices). Excluding bad apples from a platform is essential to gain users’ trust and build a strong reputation. Finally, in the online retail space, copying rival products via house brands provides consumers with competitively priced goods and helps new distributors enter the market. 

All of these practices would either be heavily scrutinized or outright banned under the Subcommittee’s proposed reforms. Beyond its direct impact on the quality of online goods and services, this huge shift would threaten the climate of permissionless innovation that has arguably been key to Silicon Valley’s success. 

More fundamentally, these reforms would mostly protect certain privileged rivals at the expense of the wider industry. Take Apple’s App Store: Epic Games and others have complained about the 30% commission charged by Apple for in-app purchases (as is standard throughout the industry). Yet, as things stand, roughly 80% of apps pay no commission at all. Tackling this 30% commission — for instance by allowing developers to bypass Apple’s in-app payment processing — would almost certainly result in larger fees for small developers. In short, regulation could significantly impede smaller firms.

Fortunately, there is another way. For decades, antitrust law — guided by the judge-made consumer welfare standard — has been the cornerstone of economic policy in the US. During that time, America built a tech industry that is the envy of the world. This should give pause to would-be reformers. There is a real chance overbearing regulation will permanently hamper America’s tech industry. With competition from China more intense than ever, it is a risk that the US cannot afford to take.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of the standard essential patents (“SEPs”) are charging their commercial partners too much for the rights to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) are deterring innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of appropriate remedies in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most consequential has arguably been the limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of so-called standard essential patents (SEPs). 

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably Standard Developing Organizations (SDOs) – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy changes has been the IEEE’s sweeping 2015 revision of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees.” They also mandated that royalties pertaining to SEPs be based upon the value of the smallest saleable component that practices the patented technology. Both measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”

A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.

This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.

Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”

Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”

Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.

But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.

It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.

If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.

The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proved her point.

Lucy Kay McBath, a first-term congresswoman from Georgia, began one such drill with the claim that Facebook’s privacy policy from 2004, when Zuckerberg was 20 and Facebook had under a million users, applies in perpetuity. “We do not and will not use cookies to collect private information from any users,” it said. Has Facebook broken its “promise,” McBath asked, not to use cookies to collect private information? No, Zuckerberg explained (letting the question’s shaky premise slide), Facebook uses only standard log-in cookies.

“So once again, you do not use cookies? Yes or no?” McBath interjected. Having now asked a completely different question, and gotten a response resembling what she wanted—“Yes, we use cookies [on log-in features]”—McBath could launch into her canned condemnation. “The bottom line here,” she said, reading from her page, “is that you broke a commitment to your users. And who can say whether you may or may not do that again in the future?” The representative pressed on with her performance, not noticing or not caring that the person she was pretending to engage with had upset her script.

Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?

In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?

The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”

Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.

Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.

The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the Report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions in which incumbent firms buy innovative startups in order to shut down the startup’s rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry.

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” in which a company is acquired in order to hire its workforce en masse, are common in tech and explicitly ruled out of being “killers” by this paper, for example: it is not harmful to innovation or output overall if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the combined platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher. 
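The base-rate point can be made concrete with a stylized calculation (all numbers below are illustrative assumptions, not estimates from Cunningham, et al.): even a reasonably accurate screen for “killer” acquisitions will mostly flag procompetitive deals when the underlying phenomenon is rare.

```python
# Stylized base-rate illustration (all inputs are assumptions, not data).
# Suppose an enforcement screen flags mergers as "killers" with some
# sensitivity (share of true killers it catches) and false-positive rate
# (share of benign mergers it wrongly flags). The share of flagged deals
# that really are killers -- the positive predictive value -- falls
# sharply as the true "killer" base rate falls.

def positive_predictive_value(base_rate, sensitivity=0.8, false_positive_rate=0.1):
    """P(truly a killer | flagged by the screen), via Bayes' rule."""
    true_flags = sensitivity * base_rate
    false_flags = false_positive_rate * (1 - base_rate)
    return true_flags / (true_flags + false_flags)

# Pharma-like base rate (5.3% of acquisitions are "killers"):
ppv_pharma = positive_predictive_value(0.053)
# Hypothetical, rarer tech base rate (1%):
ppv_tech = positive_predictive_value(0.01)

print(f"Flagged deals that are truly killers, pharma-like rate: {ppv_pharma:.0%}")
print(f"Flagged deals that are truly killers, tech-like rate:   {ppv_tech:.0%}")
# Under these assumptions, roughly 31% of flagged pharma-like deals are
# true killers, versus under 8% of flagged tech-like deals -- i.e., most
# condemned tech mergers would in fact be procompetitive.
```

The specific sensitivity and false-positive figures are invented for the sketch; the qualitative point — that a rarer phenomenon means a higher share of erroneous condemnations — holds for any imperfect screen.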

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring companies expanded the acquired services and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument from the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer, ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital-platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow that the (presumed) existence of false negatives implies that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. By analogy, a well-run court system will still fail to convict a few criminals, because the cost of accidentally convicting an innocent person is so high; the existence of those false negatives does not show that the courts are underenforcing the law.
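A toy expected-cost comparison (every probability and cost below is an assumption chosen for illustration) shows why the mere existence of false negatives does not imply underenforcement: if blocking a good merger is costlier than missing a bad one, a stricter regime can raise total error costs even while catching more bad deals.

```python
# Toy error-cost comparison of two hypothetical merger-review regimes.
# All probabilities and costs are illustrative assumptions only.

def expected_error_cost(p_false_positive, p_false_negative,
                        cost_false_positive, cost_false_negative):
    """Expected cost per reviewed merger from the two kinds of error."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative)

# Assume blocking a procompetitive merger (false positive) forfeits
# durable efficiencies, while clearing an anticompetitive one (false
# negative) can often be partially corrected later (entry, conduct
# remedies, ex post enforcement) -- so cost_fp > cost_fn here.
COST_FP, COST_FN = 10.0, 3.0

lenient = expected_error_cost(0.01, 0.05, COST_FP, COST_FN)
strict  = expected_error_cost(0.08, 0.01, COST_FP, COST_FN)

print(f"Expected error cost, lenient regime: {lenient:.2f}")
print(f"Expected error cost, strict regime:  {strict:.2f}")
# With these assumed numbers, the strict regime has fewer false
# negatives (1% vs. 5%) yet a higher total expected error cost
# (0.83 vs. 0.25 per reviewed merger).
```

The conclusion reverses if one instead assumes false negatives are costlier; the point is only that counting false negatives alone, as the Report does, cannot settle the question.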

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it did suggest that the review process could have been done differently, it also highlighted efficiencies that arose from each merger, and did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been anticompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations on merger control.

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well.

This could provide a basis for blocking almost any acquisition by an incumbent firm on ‘scale’ grounds. After all, if a photo-editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition might not prevent the emergence of a new platform on a similar scale, however unlikely? Such an approach would make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or less. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm — and in many cases assumptions are the most one can call them — and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]


The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical-) liberalism, at heart I am pragmatic. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would even entertain the idea of using crises to advance my agenda when doing so is not clearly in service of ameliorating immediate problems.

Sadly, I have also learned that there are those who are not similarly pragmatic, and who are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter being passed around Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.

The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things): 

  • voluntary licensing and non-enforcement of IP;
  • abrogation of IPR by WIPO members using the “flexibility” in the international IP regime;
  • the removal of geographical restrictions on IP licenses;
  • forcing patents into COVID-19 patent pools; and 
  • the implementation of compulsory licensing. 

And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”

Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.

We all want treatments for infection, vaccines for prevention, and ample supply of personal protective equipment as soon as possible, but if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines and better preventative tools in the long run. 

Fundamentally, the letter reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs.

What is certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open-source status or forcing compulsory licensing on the firms doing this work will encourage that work to proceed with all due haste—and every indication that the opposite is the case.

Where there are short term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.

Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:

Culture, Fitness & Entertainment

  • “HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of ‘Veep,’ ‘The Sopranos,’ ‘Silicon Valley’”
  • Dozens (or more) of artists, both famous and lesser known, are releasing free back-catalog performances or are taking part in free live-streaming sessions on social media platforms. Notably, viewers are often welcome to donate or “pay what they want” to help support these artists (more on this below).
  • The NBA, NFL, and NHL are offering free access to their back catalogue of games.
  • A large array of music production software can now be used free on extended trials for 3 months (or completely free and unlimited in some cases). 
  • CBS All Access expanded its free trial period.
  • Neil Gaiman and Harper Collins granted permission to Levar Burton to livestream readings from their catalogs.
  • Disney is releasing movies early onto its (paid) Disney+ service.
  • Gold’s Gym is providing free access to its app-based workouts.
  • The Met is streaming free recordings of its Live in HD series.
  • The Seattle Symphony is offering free access to some of its recorded performances.
  • The UK National Theater is streaming some of its most popular plays for free.
  • Andrew Lloyd Webber is streaming his shows online for free.

Science, News & Education

  • Scholastic released free content intended to help educate students stuck at home while sheltering in place.
  • Nearly 100 academic journals, societies, institutes, and companies signed a commitment to make research and data on COVID-19 freely available, at least for the duration of the outbreak.
  • The Atlantic lifted paywall restrictions on access to its COVID-19-related content.
  • The New England Journal of Medicine is allowing free access to COVID-19-related resources.
  • The Lancet allows free access to research it publishes on COVID-19.
  • All material published by the BMJ on the coronavirus outbreak is freely available.
  • The AAAS-published Science allows free access to its coronavirus research and commentary.
  • Elsevier gave full access to its content on its COVID-19 Information Center for PubMed Central and other public health databases.
  • The American Economic Association announced open access to all of its journals until the end of June.
  • JSTOR expanded free access to some of its scholarship.

Medicine & Technology

  • The Global Center for Medical Design is developing license-free PPE designs that can be quickly implemented by manufacturers.
  • Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
  • AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
  • Google is working with health researchers to provide anonymized and aggregated user location data. 
  • Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies to help strained IT teams and partners ready themselves and their clients for remote work.
  • Microsoft is offering free subscriptions to its Teams product for six months.
  • Zoom expanded its free access and other limitations for educational institutions around the world.

Incentivize innovation, now more than ever

In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon. 

Any clear-eyed assessment of the broader course of the pandemic and the response to it gives the lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival.

In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship. 

As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?

Indeed, the flood of unable-to-tour artists currently offering “donate what you can” streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although the potential to actually make a living while touring is possibly out of reach for many or most artists, those who had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away.

There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime. 

And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily as opposed to through government fiat. When done voluntarily the IP owner determines the contours and extent of “open sourcing” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e. once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment. 

Should a situation arise in which a particular piece of property is genuinely required for the public good, our lawmakers can consider the question then. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences.

Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Neuchterlein on the lack of guidance in the draft vertical merger guidelines

Neuchterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy as well as the empirical work confirming the largely beneficial nature of vertical mergers. Related to this, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Neuchterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Neuchterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he argues that linking the two separate 20 percent thresholds (relevant market and related product) yields a too-small set of situations in which firms might qualify for the safe harbor. Instead, he believes, the linked thresholds do more to facilitate the agencies’ discretion and little to provide clarity to firms and consumers.

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity and, in some cases, the 1984 guidelines were better aligned with the 2008 EU guidelines. Losing that specificity in the new draft guidelines sets back the standards. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but believes that the 20 percent thresholds don’t reveal enough information. She believes that it would be helpful “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. They are much more complex than horizontal simulations, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how they view the critical differences between horizontal and vertical mergers.

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan's post focuses on what he refers to as "pure" vertical mergers that do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of speculative theories of vertical harms that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers,

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak's post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm, such as the threat of regulatory evasion, and then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines' attempt to clarify how the Agencies think about mergers and competition actually demonstrates how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions needed to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make defining even "obvious" markets and products a fraught exercise that devolves into a battle of experts.

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is "arbitrarily low" given the generally procompetitive nature of vertical combinations.

Instead, DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges, along with a post-merger HHI measure below 2000.
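For context, the HHI referenced here is simply the sum of the squared market shares of all firms in the relevant market. A minimal sketch, using hypothetical shares that are not drawn from the post, shows what the 2000-point level means in practice:

```python
# Herfindahl-Hirschman Index (HHI): the sum of squared market shares,
# with shares expressed in percentage points. The shares below are
# hypothetical and used only to illustrate the 2000-point level.
def hhi(shares):
    """Return the HHI for a list of market shares in percent."""
    return sum(s ** 2 for s in shares)

five_firm_market = [30, 25, 20, 15, 10]   # shares sum to 100%
print(hhi(five_firm_market))              # 2250, above the 2000 mark
```

On this scale, a pure monopoly scores 10,000 (100 squared), while a highly fragmented market approaches zero, which is why 2000 functions as a rough dividing line for concentrated markets.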

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on "potential competition."

In particular, the draft guidelines should address the literature demonstrating that vertical acquisitions of small tech firms by large tech firms are largely complementary and procompetitive. Large tech firms are good at process innovation, while smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold "safe harbor" will likely create a problematic "sufficient condition" for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market-definition paradigm. White notes that markets need to be defined in a way that permits a determination of whether market power exists post-merger, but the guidelines refrain from recommending a vertical-specific method for defining markets.

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need either to repudiate this theory or else more fully explain the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce against either a merger or a contract, defendants can frequently realize efficiencies only in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a per se preference for more enforcement, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers.

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines' focus on market-share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve "organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets."

Ultimately, Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.