The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as net neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to craft new antitrust rules through rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to wasting public and private resources on litigation.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable, despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, models are not uncontroversial tools for informing policy decisions. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that this gave rise to a market failure: bees fly where they please, and farmers cannot prevent them from feeding on blossoming flowers—allegedly causing underinvestment in both orchards and beekeeping. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3 km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
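Samuelson’s claim can be stated formally in a couple of lines. The sketch below is my own notation (a toll τ, ship values v, and a value density f), not Samuelson’s:

```latex
% With zero marginal cost, a toll excludes only trips that are costless
% to serve. A ship with private value v sails iff v >= tau, so the
% aggregate deadweight loss across N ships with value density f(v) is:
\[
DWL(\tau) \;=\; N \int_{0}^{\tau} v\, f(v)\, dv \;>\; 0
\qquad \text{for any } \tau > 0 .
\]
% Coase's empirical rebuttal, discussed below, amounts to showing that
% at historical light dues almost no voyages fell below the toll, so
% this integral was negligible in practice.
```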

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). That said, tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
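That last point deserves a quick illustration. The sketch below is the textbook linear-demand example of double marginalization, with hypothetical numbers rather than figures drawn from Coase:

```latex
% Demand q = 1 - p; the lighthouse (upstream) has zero marginal cost.
% Separate tolls: the port sets the retail price p given the light due w.
\[
\max_{p}\,(p - w)(1 - p) \;\Rightarrow\; p = \tfrac{1 + w}{2}, \qquad
\max_{w}\, w\,\tfrac{1 - w}{2} \;\Rightarrow\; w = \tfrac12,\;
p = \tfrac34,\; q = \tfrac14 .
\]
% Tied port fees and light dues (one joint margin):
\[
\max_{p}\, p\,(1 - p) \;\Rightarrow\; p = \tfrac12,\; q = \tfrac12 .
\]
% A single margin lowers the price from 3/4 to 1/2 and doubles output,
% so tying could indeed have left sailors better off than two tolls.
```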

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
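Hardin’s informal story maps onto a standard common-pool model. The sketch below is a stylized formalization in the Gordon tradition, not Hardin’s own notation:

```latex
% n herdsmen choose herd sizes g_i; G = sum_i g_i is the total herd.
% v(G) is the declining value per animal on the shared pasture and
% c the private cost of keeping one animal.
\[
\text{Herdsman } i: \quad \max_{g_i}\; g_i\, v(G) - c\, g_i
\;\Rightarrow\; v(G) + g_i\, v'(G) = c .
\]
\[
\text{Social planner:} \quad \max_{G}\; G\, v(G) - c\, G
\;\Rightarrow\; v(G) + G\, v'(G) = c .
\]
% Because g_i < G and v'(G) < 0, each herdsman counts only the damage
% to his own animals, not the damage to everyone else's, so the
% equilibrium herd exceeds the planner's optimum. Ostrom's point,
% below, is that rules and norms can reprice exactly this wedge.
```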

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.

These bottom-up solutions are certainly not perfect. Many common institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
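The lock-in argument boils down to a single inequality. The condition below is a stylized restatement in the spirit of the network effects literature, not David’s exact formulation:

```latex
% A new typist choosing layout s gets intrinsic quality a_s plus a
% network benefit nu for each of the n_s existing users of s.
\[
u_s = a_s + \nu\, n_s , \qquad s \in \{Q, D\} ,
\]
\[
\text{so Dvorak wins a new adopter only if} \quad
a_D - a_Q \;\geq\; \nu\,(n_Q - n_D) .
\]
% With a large installed base n_Q, even a genuinely superior layout
% (a_D > a_Q) can fail to spread. Everything therefore hinges on the
% empirical magnitudes of a_D - a_Q and nu, which is precisely what
% later scholars went on to question.
```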

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investments after mergers involving large tech firms. But even on its own terms, this data simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this points to a paradox that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, the European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate those problems as to solve them. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

In the wake of its departure from the European Union, the United Kingdom will have the opportunity to enter into new free trade agreements (FTAs) with its international trading partners that lower existing tariff and non-tariff barriers. Achieving major welfare-enhancing reductions in trade restrictions will not be easy. Trade negotiations pose significant political sensitivities, such as those arising from the high levels of protection historically granted certain industry sectors, particularly agriculture.

Nevertheless, the political economy of protectionism suggests that, given deepening globalization and the sudden change in U.K. trade relations wrought by Brexit, the outlook for substantial liberalization of U.K. trade has become much brighter. Below, I address some of the key challenges facing U.K. trade negotiators as they seek welfare-enhancing improvements in trade relations and offer a proposal to deal with novel trade distortions in the least protectionist manner.

Two New Challenges Affecting Trade Liberalization

In addition to traditional trade issues, such as tariff levels and industry sector-specific details, U.K. trade negotiators—indeed, trade negotiators from all nations—will have to confront two relatively new and major challenges that are creating significant frictions.

First, behind-the-border anticompetitive market distortions (ACMDs) have largely replaced tariffs as the preferred means of protection in many areas. As I explained in a previous post on this site (citing an article by trade-law scholar Shanker Singham and me), existing trade and competition law have not been designed to address the ACMD problem:

[I]nternational trade agreements simply do not reach a variety of anticompetitive welfare-reducing government measures that create de facto trade barriers by favoring domestic interests over foreign competitors. Moreover, many of these restraints are not in place to discriminate against foreign entities, but rather exist to promote certain favored firms. We dub these restrictions “anticompetitive market distortions” or “ACMDs,” in that they involve government actions that empower certain private interests to obtain or retain artificial competitive advantages over their rivals, be they foreign or domestic. ACMDs are often a manifestation of cronyism, by which politically-connected enterprises successfully pressure government to shield them from effective competition, to the detriment of overall economic growth and welfare. …

As we emphasize in our article, existing international trade rules have been unable to reach ACMDs, which include: (1) governmental restraints that distort markets and lessen competition; and (2) anticompetitive private arrangements that are backed by government actions, have substantial effects on trade outside the jurisdiction that imposes the restrictions, and are not readily susceptible to domestic competition law challenge. Among the most pernicious ACMDs are those that artificially alter the cost-base as between competing firms. Such cost changes will have large and immediate effects on market shares, and therefore on international trade flows.

Second, in recent years, the trade remit has expanded to include “nontraditional” issues such as labor, the environment, and now climate change. These concerns have generated support for novel tariffs that could serve as vehicles for protectionism and harmful trade distortions. As explained in a recent article by the Special Trade Commission advisory group (former senior trade and antitrust officials who have provided independent policy advice to the U.K. government):

[The rise of nontraditional trade issues] has renewed calls for border tax adjustments or dual tariffs on an ex-ante basis. This is in sharp tension with the W[orld Trade Organization’s] long-standing principle of technological neutrality, and focus on outcomes as opposed to discriminating on the basis of the manner of production of the product. The problem is that it is too easy to hide protectionist impulses into concerns about the manner of production, and once a different tariff applies, it will be very difficult to remove. The result will be to significantly damage the liberalisation process itself leading to severe harm to the global economy at a critical time as we recover from Covid-19. The potentially damaging effects of ex ante tariffs will be visited most significantly in developing countries.

Dealing with New Trade Challenges in the Least Protectionist Manner

A broad approach to U.K. trade liberalization that also addresses the two new trade challenges is advanced in a March 2 report by the U.K. government’s Trade and Agricultural Commission (TAC, an independent advisory agency established in 2020). Although addressed primarily to agricultural trade, the TAC report enunciates principles applicable to U.K. trade policy in general, considering the impact of ACMDs and nontraditional issues. Key aspects of the TAC report are summarized in an article by Shanker Singham (the scholar who organized and convened the Special Trade Commission and who also served as a TAC commissioner):

The heart of the TAC report’s import policy contains an innovative proposal that attempts to simultaneously promote a trade liberalising agenda in agriculture, while at the same time protecting the UK’s high standards in food production and ensuring the UK fully complies with WTO rules on animal and plant health, as well as technical regulations that apply to food trade.

This proposal includes a mechanism to deal with some of the most difficult issues in agricultural trade which relate to animal welfare, environment and labour rules. The heart of this mechanism is the potential for the application of a tariff in cases where an aggrieved party can show that a trading partner is violating agreed standards in an FTA.

The result of the mechanism is a tariff based on the scale of the distortion which operates like a trade remedy. The mechanism can also be used offensively where a country is preventing market access by the UK as a result of the market distortion, or defensively where a distortion in a foreign market leads to excess exports from that market. …

[T]he tariff would be calibrated to the scale of the distortion and would apply only to the product category in which the distortion is occurring. The advantage of this over a more conventional trade remedy is that it is based on cost as opposed to price and is designed to remove the effects of the distorting activity. It would not be applied on a retaliatory basis in other unrelated sectors.

In exchange for this mechanism, the UK commits to trade liberalisation and, within a reasonable timeframe, zero tariffs and zero quotas. This in turn will make the UK’s advocacy of higher standards in international organisations much more credible, another core TAC proposal.

The TAC report also notes that behind the border barriers and anti-competitive market distortions (“ACMDs”) have the capacity to damage UK exports and therefore suggests a similar mechanism or set of disciplines could be used offensively. Certainly, where the ACMD is being used to protect a particular domestic industry, using the ACMD mechanism to apply a tariff for the exports of that industry would help, but this may not apply where the purpose is protective, and the industry does not export much.

I would argue that in this case, it would be important to ensure that UK FTAs include disciplines on these ACMDs which if breached could lead to dispute settlement and the potential for retaliatory tariffs for sectors in the UK’s FTA partner that do export. This is certainly normal WTO-sanctioned practice, and could be used here to encourage compliance. It is clear from the experience in dealing with countries that engage in ACMDs for trade or competition advantage that unless there are robust disciplines, mere hortatory language would accomplish little or nothing.

But this sort of mechanism with its concomitant commitment to freer trade has much wider potential application than just UK agricultural trade policy. It could also be used to solve a number of long standing trade disputes such as the US-China dispute, and indeed the most vexed questions in trade involving environment and climate change in ways that do not undermine the international trading system itself.

This is because the mechanism is based on an ex post tariff as opposed to an ex ante one which contains within it the potential for protectionism, and is prone to abuse. Because the tariff is actually calibrated to the cost advantage which is secured as a result of the violation of agreed international standards, it is much more likely that it will be simply limited to removing this cost advantage as opposed to becoming a punitive measure that curbs ordinary trade flows.

It is precisely this type of problem solving and innovative thinking that the international trading system needs as it faces a range of challenges that threaten liberalisation itself and the hard-won gains of the post war GATT/WTO system itself. The TAC report represents UK leadership that has been sought after since the decision to leave the EU. It has much to commend it.
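To see how such a cost-based calibration might work in practice, consider a deliberately simple hypothetical (the numbers are mine, not the TAC’s):

```latex
% Suppose an ACMD (say, a production subsidy) lowers a foreign
% producer's unit cost in one product category from c0 to c1.
\[
c_0 = 10, \quad c_1 = 8
\;\Rightarrow\;
\tau \;=\; c_0 - c_1 \;=\; 2 \ \text{per unit} .
\]
% The ex post tariff tau removes the distortion's cost advantage in
% that product category only, and lapses when the distortion ends,
% whereas an ex ante tariff would tax ordinary trade flows regardless
% of conduct.
```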

Assessment and Conclusion

Even when administered by committed free traders, real-world trade liberalization is an exercise in welfare optimization, subject to constraints imposed by the actions of organized interest groups expressed through the political process. The rise of new coalitions (such as organizations committed to specified environmental goals, including limiting global warming) and the proliferation of ACMDs further complicates the trade negotiation calculus.

Fortunately, recognizing the “reform moment” created by Brexit, free trade-oriented experts (in particular, the TAC, supported by the Special Trade Commission) have recommended that the United Kingdom pursue a bold move toward zero tariffs and quotas. Narrow exceptions to this policy would involve after-the-fact tariffications to offset (1) the distortive effects of ACMDs and (2) derogation from rules embodying nontraditional concerns, such as environmental commitments. Such tariffications would be limited and cost-based, and, as such, welfare-superior to ex ante tariffs calibrated to price.

While the details need to be worked out, the general outlines of this approach represent a thoughtful and commendable market-oriented effort to secure substantial U.K. trade liberalization, subject to unavoidable constraints. More generally, one would hope that other jurisdictions (including the United States) take favorable note of this development as they generate their own trade negotiation policies. Stay tuned.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramsi Woodcock (Assistant Professor of Law, University of Kentucky; Assistant Professor of Management, Gatton College of Business and Economics).]

Specialists know that the antitrust courses taught in law schools and economics departments have an alter ego in business curricula: the course on business strategy. The two courses cover the same material, but from opposite perspectives. Antitrust courses teach how to end monopolies; strategy courses teach how to construct and maintain them.

Strategy students go off and run businesses, and antitrust students go off and make government policy. That is probably the proper arrangement if the policy the antimonopolists make is domestic. We want the domestic economy to run efficiently, and so we want domestic policymakers to think about monopoly—and its allocative inefficiencies—as something to be discouraged.

The coronavirus, and the shortages it has caused, have shown us that putting the antimonopolists in charge of international policy is, by contrast, a very big mistake.

Because we do not yet have a world government. America’s position, in relation to the rest of the world, is therefore more akin to that of a business navigating a free market than it is to a government seeking to promote efficient interactions among the firms that it governs. To flourish, America must engage in international trade with a view to creating and maintaining monopoly positions for itself, rather than eschewing them in the interest of realizing efficiencies in the global economy. Which is to say: we need strategists, not antimonopolists.

For the global economy is not America, and there is no guarantee that competitive efficiencies will redound to America’s benefit, rather than to that of her competitors. Absent a world government, other countries will pursue monopoly regardless of what America does, and unless America acts strategically to build and maintain economic power, America will eventually occupy a position of commercial weakness, with all of the consequences for national security that implies.

When Antimonopolists Make Trade Policy

The free traders who have run American economic policy for more than a generation are antimonopolists playing on a bigger stage. Like their counterparts in domestic policy, they are loyal in the first instance only to the efficiency of the market, not to any particular trader. They are content to establish rules of competitive trading—the antitrust laws in the domestic context, the World Trade Organization in the international context—and then to let the chips fall where they may, even if that means allowing present or future adversaries to build up, through legitimate means, competitive advantages that the United States is unable to overcome.

Strategy is consistent with competition when markets are filled with traders of atomic size, for then no amount of strategy can deliver a competitive advantage to any trader. But global markets, more even than domestic markets, are filled with traders of macroscopic size. Strategy then requires that each trader seek to gain and maintain advantages, undermining competition. The only way antimonopolists could induce the trading behemoth that is America to behave competitively, and to let the chips fall where they may, was to convince America voluntarily to give up strategy, to sacrifice self-interest on the altar of efficient markets.

And so they did.

Thus when the question arose whether to permit American corporations to move their manufacturing operations overseas, or to permit foreign companies to leverage their efficiencies to dominate a domestic industry and ensure that 90% of domestic supply would be imported from overseas, the answer the antimonopolists gave was: “yes.” Because it is efficient. Labor abroad is cheaper than labor at home, and transportation costs low, so efficiency requires that production move overseas, and our own resources be reallocated to more competitive uses.

This is the impeccable logic of static efficiency, of general equilibrium models allocating resources optimally. But it is instructive to recall that the men who perfected this model were not trying to describe a free market, much less international trade. They were trying to create a model that a central planner could use to allocate resources to a state’s subjects. What mattered to them in building the model was the good of the whole, not any particular part. And yet it is to a particular part of the global whole that the United States government is dedicated.

The Strategic Trader

Students of strategy would have taken a very different approach to international trade. Strategy teaches that markets are dynamic, and that businesses must make decisions based not only on the market signals that exist today, but on those that can be made to exist in the future. For the successful strategist, unlike the antimonopolist, identifying a product for which consumers are willing to pay the costs of production is not alone enough to justify bringing the product to market. The strategist must be able to secure a source of supply, or a distribution channel, that competitors cannot easily duplicate, before the strategist will enter.

Why? Because without an advantage in supply, or distribution, competitors will duplicate the product, compete away any markups, and leave the strategist no better off than if he had never undertaken the project at all. Indeed, he may be left bankrupt, if he has sunk costs that competition prevents him from recovering. Unlike the economist, the strategist is interested in survival, because he is a partisan of a part of the market—himself—not the market entire. The strategist understands that survival requires power, and all power rests, to a greater or lesser degree, on monopoly.

The strategist is not therefore a free trader in the international arena, at least not as a matter of principle. The strategist understands that trading from a position of strength can enrich, and trading from a position of weakness can impoverish. And to occupy that position of strength, America must, like any monopolist, control supply. Moreover, in the constantly-innovating markets that characterize industrial economies, markets in which innovation emerges from learning by doing, control over physical supply translates into control over the supply of inventions itself.

The strategist does not permit domestic corporations to offshore manufacturing in any market in which the strategist wishes to participate, because that is unsafe: foreign countries could use control over that supply to extract rents from America, to drive domestic firms to bankruptcy, and to gain control over the supply of inventions.

And, as the new trade theorists belatedly discovered, offshoring prevents the development of the dense, geographically-contiguous, supply networks that confer power over whole product categories, such as the electronics hub in Zhengzhou, where iPhone-maker Foxconn is located.

Or the pharmaceutical hub in Hubei.

Coronavirus and the Failure of Free Trade

Today, America is unprepared for the coming wave of coronavirus cases because the antimonopolists running our trade policy do not understand the importance of controlling supply. There is a shortage of masks, because China makes half of the world’s masks, and the Chinese have cut off supply, the state having forbidden even non-Chinese companies that offshored mask production from shipping home masks for which American customers have paid. Not only that, but in January China bought up most of the world’s existing supply of masks, with free-trade-obsessed governments standing idly by as the clock ticked down to their own domestic outbreaks.  

New York State, which lies at the epicenter of the crisis, has agreed to pay five times the market price for foreign supply. That’s not because the cost of making masks has risen, but because sellers are rationing with price. Which is to say: using their control over supply to beggar the state. Moreover, domestic mask makers report that they cannot ramp up production because of a lack of supply of raw materials, some of which are actually made in Wuhan, China. That’s the kind of problem that does not arise when restrictions on offshoring allow manufacturing hubs to develop domestically.

But a shortage of masks is just the beginning. Once a vaccine is developed, the race will be on to manufacture it, and America controls less than 30% of the manufacturing facilities that supply pharmaceuticals to American markets. Indeed, just about the only virus-relevant industries in which we do not have a real capacity shortage today are food and toilet paper, panic buying notwithstanding. Because, fortunately for us, antimonopolists could not find a way to offshore California and Oregon. If they could have, they surely would have, since both agriculture and timber are labor-intensive industries.

President Trump’s failed attempt to buy a German drug company working on a coronavirus vaccine shows just how damaging free market ideology has been to national security: as Trump should have anticipated given his resistance to the antimonopolists’ approach to trade, the German government nipped the deal in the bud. When an economic agent has market power, the agent can pick its prices, or refuse to sell at all. Only in general equilibrium fantasy is everything for sale, and at a competitive price to boot.

The trouble is: American policymakers, perhaps more than those in any other part of the world, continue to act as though that fantasy were real.

Failures Left and Right

America’s coronavirus predicament is rich with intellectual irony.

Progressives resist free trade ideology, largely out of concern for the effects of trade on American workers. But they seem not to have realized that in doing so they are actually embracing strategy, at least for the benefit of labor.

Yet progressives simultaneously reject the approach to industrial organization economics that underpins strategic thinking in business: Joseph Schumpeter’s theory of creative destruction, which holds that strategic behavior by firms seeking to achieve and maintain monopolies is ultimately good for society, because it leads to a technological arms race as firms strive to improve supply, distribution, and indeed product quality, in ways that competitors cannot reproduce.

Even if progressives choose to reject Schumpeter’s argument that strategy makes society better off—a proposition that is particularly suspect at the international level, where the availability of tanks ensures that the creative destruction is not always creative—they have much to learn from his focus on the economics of survival.

By the same token, conservatives embrace Schumpeter in arguing for less antitrust enforcement in domestic markets, all the while advocating free trade at the international level and savaging governments for using dumping and tariffs—which is to say, the tools of monopoly—to strengthen their trading positions. It is deeply peculiar to watch the coronavirus expose conservative economists as pie-in-the-sky internationalists. And yet as the global market for coronavirus necessities seizes up, the ideology that urged us to dispense with producing these goods ourselves, out of faith that we might always somehow rely on the support of the rest of the world, provided through the medium of markets, looks pathetically naive.

The cynic might say that inconsistency has snuck up on both progressives and conservatives because each remains too sympathetic to a different domestic constituency.

Dodging a Bullet

America is lucky that a mere virus exposed the bankruptcy of free trade ideology. Because war could have done that instead. It is difficult to imagine how a country that cannot make medical masks—much less a Macbook—would be able to respond effectively to a sustained military attack from one of the many nations that are closing the technological gap long enjoyed by the United States.

The lesson of the coronavirus is: strategy, not antitrust.

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US, and that this has caused economic harm, has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that examine the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing.

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).
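For readers who don’t work with these measures every day: the HHI referenced above is simply the sum of squared market shares. A minimal sketch — with purely hypothetical shares — illustrates why the index, standing alone, carries no causal information:

```python
# Herfindahl-Hirschman Index: sum of squared market shares.
# Shares are in percent (0-100), so the index runs from near 0
# (atomistic competition) to 10,000 (monopoly).
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Two very different stories can produce the same structure:
# (a) a productive firm wins 50% of the market by cutting prices, or
# (b) a dominant firm excludes rivals and raises prices to reach 50%.
shares = [50, 20, 15, 10, 5]  # hypothetical shares, for illustration only
print(hhi(shares))  # 3250 under either story -- the index can't tell them apart
```

The index is easy to compute, which is precisely why it is tempting to regress market outcomes on it; but, as the quoted passage stresses, the same value is consistent with both procompetitive and anticompetitive explanations.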

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
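The “more than 100%” arithmetic is easier to see once a top firm’s employment is written as (number of locations served) × (average employment per location): if the second term falls while total employment grows, growth in locations must more than account for all of the total. A stylized sketch, using hypothetical numbers chosen only to mimic the paper’s pattern:

```python
import math

# Hypothetical top-firm figures (not from the paper), illustrating the
# decomposition: total employment = locations served * employment per location.
loc_then, emp_then = 100, 50   # "1977": 100 locations x 50 workers = 5,000
loc_now, emp_now = 400, 40     # "2013": 400 locations x 40 workers = 16,000

total = math.log(loc_now * emp_now) - math.log(loc_then * emp_then)
extensive = math.log(loc_now / loc_then)   # growth from adding locations
intensive = math.log(emp_now / emp_then)   # growth per location (negative here)

print(f"{extensive / total:.0%}")  # ~119%: locations explain >100% of growth
print(f"{intensive / total:.0%}")  # ~-19%: shrinking per-location size offsets it
```

Because the intensive margin is negative, the extensive margin necessarily exceeds 100% — exactly the pattern Hsieh and Rossi-Hansberg report.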

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clear that concentration is causing prices to rise or otherwise doing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).

Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But it’s a low (antitrust) risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.

I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.

The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.

These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.

Current trends in private-sector-driven healthcare reform

In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.

Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.

Drug pricing and distribution innovations

To begin with, the CVS/Aetna deal, along with the also recently approved Cigna-Express Scripts deal, solidifies the vertical integration of pharmacy benefit managers (PBMs) with insurers.

But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.

Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.   

Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.

Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market and the opportunities for a wide range of firms to create efficiencies and lower prices.

At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.

While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. As McKesson CEO John Hammergren put it on a recent earnings call: “To some extent, we were Amazon before it was cool to be Amazon.”)

Treatment innovations

Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to solve some of the costly complexities that plague the healthcare market.

Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.

Capping off (for now) a buying frenzy in recent years that included its purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a significant group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing significant parts of the traditional, hospital-centric care model for some time now.

And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company Roche announced recently that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.

Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,

[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.

More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner

In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and those underlying a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, a fluidity of traditional markets, and a degree of business model innovation that point to a great deal more competitiveness than static PBM market-share numbers would suggest.

This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.

While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.

As I noted in a discussion of those ag-biotech mergers — in a point equally applicable here — regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—are being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be abused or misapplied. It is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative solution to piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over websites that are “blatantly, overwhelmingly, or structurally engaged in piracy.” The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block the sites in question. The CRTC would retain ultimate authority over whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users who upload infringing content. In just one relatively large case, the cost of complying with these requests was estimated at CAD 8.25 million.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This failure, on the other hand, highlights one of the strongest points in favor of the FairPlay proposal: it shifts the focus of enforcement away from casually infringing users and directly onto the operators of sites engaged in widespread infringement. One of the chief criticisms of Canada’s current “Notice and Notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is thereby bypassed entirely.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP and property generally represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too with the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harm may occur, because some harm will always occur. The proper questions are: (1) does the measure stand a chance of better giving effect to the property rights we have agreed to protect; and (2) when harms do occur, is there a sufficiently open and accessible process by which affected parties (and interested third parties) can criticize and improve the system?

On both counts, the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will establish a reasonable and effective remedy for violations of individuals’ property rights — rights whose legitimacy is broadly accepted.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program, which some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, according even to critics of the FairPlay proposal, site blocking produces a measurable reduction in piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.

(The nearly-25% figure comes from adding the roughly 9.3% of downloaders who stopped entirely to the roughly 15.3% who merely cut back.) The proposal stands to reduce the costs of combating piracy as well. As noted above, the costs of litigating a large case can run well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted that the costs of even run-of-the-mill suits essentially price copyright enforcement out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that can be deployed against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this — but that surely can’t be an indictment of lawmaking as a whole. If a form of prohibition is appropriate for category A but inappropriate for category B, then either we assume lawmakers are capable of differentiating between the two categories, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to make that case convincingly (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.
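As a quick check on that arithmetic (a minimal sketch; the small gap from the quoted 773% presumably reflects rounding in the underlying dollar figures):

```python
# Percentage increase from an old value to a new one.
def pct_increase(old, new):
    return (new - old) / old * 100

print(pct_increase(507e6, 4.4e9))  # ~768% -- "nearly 773%" on unrounded figures
print(pct_increase(10, 65))        # 550% on the rounded shares ("about 10%" to "over 65%")
```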

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

On Thursday, March 30; Friday, March 31; and Monday, April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” Ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to new entry platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and worried that combined portfolios would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will resist invalidity proceedings in court; (iii) little safety from competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

John E. Lopatka is A. Robert Noll Distinguished Professor of Law at Penn State Law School

People need to eat. All else equal, the more food that can be produced from an acre of land, the better off they’ll be. Of course, people want to pay as little as possible for their food to boot. At heart, the antitrust analysis of the pending agribusiness mergers requires a simple assessment of their effects on food production and price. But making that assessment raises difficult questions about institutional competence.

Each of the three mergers – Dow/DuPont, ChemChina/Syngenta, and Bayer/Monsanto – involves agricultural products, such as different kinds of seeds, pesticides, and fertilizers. All of these products are inputs in the production of food – the better and cheaper are these products, the more food is produced. The array of products these firms produce invites potentially controversial market definition determinations, but these determinations are standard fare in antitrust law and economics, and conventional analysis handles them tolerably well. Each merger appears to pose overlaps in some product markets, though they seem to be relatively small parts of the firms’ businesses. Traditional merger analysis would examine these markets in properly defined geographic markets, some of which are likely international. The concern in these markets seems to be coordinated interaction, and the analysis of potential anticompetitive coordination would thus focus on concentration and entry barriers. Much could be said about the assumption that product markets perform less competitively as concentration increases, but that is an issue for others or at least another day.

More importantly for my purposes here, to the extent that any of these mergers creates concentration in a market that is competitively problematic and not likely to be cured by new entry, a fix is fairly easy. These are mergers in which asset divestiture is feasible, in which the parties seem willing to divest assets, and in which interested and qualified asset buyers are emerging. To be sure, firms may be willing to divest assets at substantial cost to appease regulators even when competitive problems are illusory, and the cost of a cure in search of an illness is a real social cost. But my concern lies elsewhere.

The parties in each of these mergers have touted innovation as a beneficial byproduct of the deal if not its raison d’être. Innovation effects have made their way into merger analysis, but not smoothly. Innovation can be a kind of efficiency, distinguished from most other efficiencies by its dynamic nature. The benefits of using a plant to its capacity are immediate: costs and prices decrease now. Any benefits of innovation will necessarily be experienced in the future, and the passage of time makes benefits both less certain and less valuable, as people prefer consumption now rather than later. The parties to these mergers in their public statements, to the extent they intend to address antitrust concerns, are implicitly asserting innovation as a defense, a kind of efficiency defense. They do not concede, of course, that their deals will be anticompetitive in any product market. But for antitrust purposes, an accelerated pace of innovation is irrelevant unless the merger appears to threaten competition.

Recognizing increased innovation as a merger defense raises all of the issues that any efficiencies defense raises, and then some. First, can efficiencies be identified?  For instance, patent portfolios can be combined, and the integration of patent rights can lower transaction costs relative to a contractual allocation of rights just as any integration can. In theory, avenues of productive research may not even be recognized until the firms’ intellectual property is combined. A merger may eliminate redundant research efforts, but identifying that which is truly duplicative is often not easy. In all, identifying efficiencies related to research and development is likely to be more difficult than identifying many other kinds of efficiencies. Second, are the efficiencies merger-specific?  The less clearly research and development efficiencies can be identified, the weaker is the claim that they cannot be achieved absent the merger. But in this respect, innovation efficiencies can be more important than most other kinds of efficiencies, because intellectual property sometimes cannot be duplicated as easily as physical property can. Third, can innovation efficiencies be quantified?  If innovation is expected to take the form of an entirely new product, such as a new pesticide, estimating its value is inherently speculative. Fourth, when will efficiencies save a merger that would otherwise be condemned?  An efficiencies defense implies a comparison between the expected harm a merger will cause and the expected benefits it will produce. Arguably those benefits have to be realized by consumers to count at all, but, in any event, a comparison between expected immediate losses of customers in an input market and expected future gains from innovation may be nearly impossible to make. The Merger Guidelines acknowledge that innovation efficiencies can be considered and note many of the concerns just listed. The takeaway is a healthy skepticism of an innovation defense. The defense should generally fail unless the model of anticompetitive harm in product (or service) markets is dubious or the efficiency claim is unusually specific and the likely benefits substantial.

Innovation can enter merger analysis in an even more troublesome way, however: as a club rather than a shield. The Merger Guidelines contemplate that a merger may have unilateral anticompetitive effects if it results in a “reduced incentive to continue with an existing product-development effort or reduced incentive to initiate development of new products.”  The stark case is one in which a merger poses no competitive problem in a product market but would allegedly reduce innovation competition. The best evidence that the elimination of innovation competition might be a reason to oppose one or more of the agribusiness mergers is the recent decision of the European Commission approving the Dow/DuPont merger, subject to various asset divestitures. The Commission, echoing the Guidelines, concluded that the merger would significantly reduce “innovation competition for pesticides” by “[r]emoving the parties’ incentives to continue to pursue ongoing parallel innovation efforts” and by “[r]emoving the parties’ incentives to develop and bring to market new pesticides.”  The agreed upon fix requires DuPont to divest most of its research and development organization.

Enforcement claims that a merger will restrict innovation competition should be met with every bit the skepticism due defense claims that innovation efficiencies save a merger. There is nothing inconsistent in this symmetry. The benefits of innovation, though potentially immense – large enough to dwarf the immediate allocative harm from a lessening of competition in product markets – are speculative. In discounted utility terms, the expected harm will usually exceed the expected benefits, given our limited ability to predict the future. But the potential gains from innovation are immense, and unless we are confident that a merger will reduce innovation, antitrust law should not intervene. We rarely are — or, at least, we rarely should be.

As Geoffrey Manne points out, we still do not know a great deal about the optimal market structure for innovation. Evidence suggests that moderate concentration is most conducive to innovation, but it is not overwhelming, and more importantly no one is suggesting a merger policy that single-mindedly pursues a particular market structure. An examination of incentives to continue existing product development projects or to initiate projects to develop new products is superficially appealing, but its practical utility is elusive. Any firm has an incentive to develop products that increase demand. The Merger Guidelines suggest that a merger will reduce incentives to innovate if the introduction of a new product by one merging firm will capture substantial revenues from the other. The E.C. likely had this effect in mind in concluding that the merged entity would have “lower incentives . . . to innovate than Dow and DuPont separately.”  The Commission also observed that the merged firm would have “a lower ability to innovate” than the two firms separately, but just how a combination of research assets could reduce capability is utterly obscure.

In any event, whether a merger reduces incentives depends not only on the welfare of the merging parties but also on the development activities of actual and would-be competitors. A merged firm cannot afford to have its revenue captured by a new product introduced by a competitor. Of course, innovation by competitors will not spur a firm to develop new products if those competitors do not have the resources needed to innovate. One can imagine circumstances in which resources necessary to innovate in a product market are highly specialized; more realistically, the lack of specialized resources will decrease the pace of innovation. But the concept of specialized resources cannot mean resources a firm has developed that are conducive to innovate and that could be, but have not yet been, developed by other firms. It cannot simply mean a head start, unless it is very long indeed. If the first two firms in an industry build a plant, the fact that a new entrant would have to build a plant is not a sufficient reason to prevent the first two from merging. In any event, what resources are essential to innovation in an area can be difficult to determine.

Assuming essential resources can be identified, how many firms need to have them to create a competitive environment? The Guidelines place the number at “very small” plus one. Elsewhere, the federal antitrust agencies suggest that four firms other than the merged firm are sufficient to maintain innovation competition. We have models, whatever their limitations, that predict price effects in oligopolies. The Guidelines are based on them. But determining the number of firms necessary for competitive innovation is another matter. Maybe two is enough. We know for sure that innovation competition is non-existent if only one firm has the capacity to innovate, but not much else. We know that duplicative research efforts can be wasteful. If two firms would each spend $1 million to arrive at the same place, a merged firm might be able to invest $2 million and go twice as far or reach the first place at half the total cost. This is only to say that a merger can increase innovation efficiency, a possibility that is not likely to justify an otherwise anticompetitive merger but should usually protect from condemnation a merger that is not otherwise anticompetitive.

In the Dow/DuPont merger, the Commission found “specific evidence that the merged entity would have cut back on the amount they spent on developing innovative products.”  Executives of the two firms stated that they expected to reduce research and development spending by around $300 million. But a reduction in spending does not tell us whether innovation will suffer. The issue is innovation efficiency. If the two firms spent, say, $1 billion each on research, $300 million of which was duplicative of the other firm’s research, the merged firm could invest $1.7 billion without reducing productive effort. The Commission complained that the merger would reduce from five to four the number of firms that are “globally active throughout the entire R&D process.”  As noted above, maybe four firms competing are enough. We don’t know. But the Commission also discounts firms with “more limited R&D capabilities,” and the importance to successful innovation of multi-level integration in this industry is not clear.

When a merger is challenged because of an adverse effect on innovation competition, a fix can be difficult. Forced licensing might work, but that assumes that the relevant resource necessary to carry on research and development is intellectual property. More may be required. If tangible assets related to research and development are required, a divestiture might cripple the merged firm. The Commission remedy was to require the merged firm to divest “DuPont’s global R&D organization” that is related to the product operations that must be divested. The firm is permitted to retain “a few limited [R&D] assets that support the part of DuPont’s pesticide business” that is not being divested. In this case, such a divestiture may or may not hobble the merged firm, depending on whether the divested assets would have contributed to the research and development efforts that it will continue to pursue. That the merged firm was willing to accept the research and development divestiture to secure Commission approval does not mean that the divestiture will do no harm to the firm’s continuing research and development activities. Moreover, some product markets at issue in this merger are geographically limited, whereas the likely benefits of innovation are largely international. The implication is that increased concentration in product markets can be avoided by divesting assets to other large agribusinesses that do not operate in the relevant geographic market. But if the Commission insists on preserving five integrated firms active in global research and development activities, DuPont’s research and development activities cannot be divested to one of the other major players, which the Commission identifies as BASF, Bayer, and Syngenta, or firms with which any of them are attempting to merge, namely Monsanto and ChemChina. These are the five firms, of course, that are particularly likely to be interested buyers.

Innovation is important. No one disagrees. But the role of competition in stimulating innovation is not well understood. Except in unusual cases, antitrust institutions are ill-equipped either to recognize innovation efficiencies that save a merger threatening competition in product markets or to condemn mergers that threaten only innovation competition. Indeed, despite maintaining their prerogative to challenge mergers solely on the ground of a reduction in innovation competition, the federal agencies have in fact complained about an adverse effect on innovation in cases that also raise competitive issues in product markets. Innovation is at the heart of the pending agribusiness mergers. How regulators and courts analyze innovation in these cases will say something about whether they recognize their limitations.

Geoffrey A. Manne is Executive Director of the International Center for Law & Economics

Dynamic versus static competition

Ever since David Teece and coauthors began writing about antitrust and innovation in high-tech industries in the 1980s, we’ve understood that traditional, price-based antitrust analysis is not intrinsically well-suited for assessing merger policy in these markets.

For high-tech industries, performance, not price, is paramount — which means that innovation is key:

Competition in some markets may take the form of Schumpeterian rivalry in which a succession of temporary monopolists displace one another through innovation. At any one time, there is little or no head-to-head price competition but there is significant ongoing innovation competition.

Innovative industries are often marked by frequent disruptions or “paradigm shifts” rather than horizontal market share contests, and investment in innovation is an important signal of competition. And competition comes from the continual threat of new entry down the road — often from competitors who, though they may start with relatively small market shares, or may arise in different markets entirely, can rapidly and unexpectedly overtake incumbents.

Which, of course, doesn’t mean that current competition and ease of entry are irrelevant. Rather, because innovation should be assessed across the entire industry and not solely within the merging firms, as Joanna Shepherd noted, conduct that might impede new, disruptive, innovative entry is indeed relevant.

But it is also important to remember that innovation comes from within incumbent firms as well, and that the overall level of innovation in an industry may often be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

As Katz and Shelanski note:

To assess fully the impact of a merger on market performance, merger authorities and courts must examine how a proposed transaction changes market participants’ incentives and abilities to undertake investments in innovation.

At the same time, they point out that

Innovation can dramatically affect the relationship between the pre-merger marketplace and what is likely to happen if the proposed merger is consummated…. [This requires consideration of] how innovation will affect the evolution of market structure and competition. Innovation is a force that could make static measures of market structure unreliable or irrelevant, and the effects of innovation may be highly relevant to whether a merger should be challenged and to the kind of remedy antitrust authorities choose to adopt. (Emphasis added).

Dynamic competition in the ag-biotech industry

These dynamics seem to be playing out in the ag-biotech industry. (For a detailed look at how the specific characteristics of innovation in the ag-biotech industry have shaped industry structure, see, e.g., here (pdf)).  

One inconvenient truth for the “concentration reduces innovation” crowd is that, as the industry has experienced more consolidation, it has also become more, not less, productive and innovative. Between 1995 and 2015, for example, the market share of the largest seed producers and crop protection firms increased substantially. And yet, over the same period, annual industry R&D spending went up nearly 750 percent. Meanwhile, the resulting innovations have increased crop yields by 22 percent, reduced chemical pesticide use by 37 percent, and increased farmer profits by 68 percent.

In her discussion of the importance of considering the “innovation ecosystem” in assessing the innovation effects of mergers in R&D-intensive industries, Joanna Shepherd noted that

In many consolidated firms, increases in efficiency and streamlining of operations free up money and resources to source external innovation. To improve their future revenue streams and market share, consolidated firms can be expected to use at least some of the extra resources to acquire external innovation. This increase in demand for externally-sourced innovation increases the prices paid for external assets, which, in turn, incentivizes more early-stage innovation in small firms and biotech companies. Aggregate innovation increases in the process!

The same dynamic seems to play out in the ag-biotech industry, as well:

The seed-biotechnology industry has been reliant on small and medium-sized enterprises (SMEs) as sources of new innovation. New SME startups (often spinoffs from university research) tend to specialize in commercial development of a new research tool, genetic trait, or both. Significant entry by SMEs into the seed-biotechnology sector began in the late 1970s and early 1980s, with a second wave of new entrants in the late 1990s and early 2000s. In recent years, exits have outnumbered entrants, and by 2008 just over 30 SMEs specializing in crop biotechnology were still active. The majority of the exits from the industry were the result of acquisition by larger firms. Of 27 crop biotechnology SMEs that were acquired between 1985 and 2009, 20 were acquired either directly by one of the Big 6 or by a company that itself was eventually acquired by a Big 6 company.

While there is more than one way to interpret these statistics (and they are often used by merger opponents, in fact, to lament increasing concentration), they are actually at least as consistent with an increase in innovation through collaboration (and acquisition) as with a decrease.

For what it’s worth, this is exactly how the startup community views the innovation ecosystem in the ag-biotech industry, as well. As the latest AgFunder AgTech Investing Report states:

The large agribusinesses understand that new innovation is key to their future, but the lack of M&A [by the largest agribusiness firms in 2016] highlighted their uncertainty about how to approach it. They will need to make more acquisitions to ensure entrepreneurs keep innovating and VCs keep investing.

It’s also true, as Diana Moss notes, that

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete…. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

And yet collaboration and licensing have long been prevalent in this industry. Examples are legion, but here are just a few significant ones:

  • Monsanto’s “global licensing agreement for the use of the CRISPR-Cas genome-editing technology in agriculture with the Broad Institute of MIT and Harvard.”
  • Dow and Arcadia Biosciences’ “strategic collaboration to develop and commercialize new breakthrough yield traits and trait stacks in corn.”
  • Monsanto and the University of Nebraska-Lincoln’s “licensing agreement to develop crops tolerant to the broadleaf herbicide dicamba. This agreement is based on discoveries by UNL plant scientists.”

Both large and small firms in the ag-biotech industry continually enter into new agreements like these. See, e.g., here and here for a (surely incomplete) list of deals in 2016 alone.

At the same time, across the industry, new entry has been rampant despite increased M&A activity among the largest firms. Recent years have seen venture financing in AgTech skyrocket — from $400 million in 2010 to almost $5 billion in 2015 — and hundreds of startups now enter the industry annually.

The pending mergers

Today’s pending mergers are consistent with this characterization of a dynamic market in which structure is being driven by incentives to innovate, rather than monopolize. As Michael Sykuta points out,

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

These deals aren’t fundamentally about growing production capacity, expanding geographic reach, or otherwise enhancing market share; rather, each is a fundamental restructuring of the way the companies do business, reflecting today’s shifting agricultural markets, and the advanced technology needed to respond to them.

Technological innovation is unpredictable, often serendipitous, and frequently transformative of the ways firms organize and conduct their businesses. A company formed to grow and sell hybrid seeds in the 1920s, for example, would either have had to evolve or fold by the end of the century. Firms today will need to develop (or purchase) new capabilities and adapt to changing technology, scientific knowledge, consumer demand, and socio-political forces. The pending mergers seemingly fit exactly this mold.

As Allen Gibby notes, these mergers are essentially vertical combinations of disparate, specialized pieces of an integrated whole. Take the proposed Bayer/Monsanto merger, for example. Bayer is primarily a chemicals company, developing advanced chemicals to protect crops and enhance crop growth. Monsanto, on the other hand, primarily develops seeds and “seed traits” — advanced characteristics that ensure the hardiness of the seeds, give them resistance to herbicides and pesticides, and speed their fertilization and growth. In order to translate the individual advances of each into higher yields, it is important that these two functions work successfully together. Doing so enhances crop growth and protection far beyond what, say, spreading manure can accomplish — or either firm could accomplish working on its own.

The key is that integrated knowledge is essential to making this process function. Developing seed traits to work well with (i.e., to withstand) certain pesticides requires deep knowledge of the pesticide’s chemical characteristics, and vice-versa. Processing huge amounts of data to determine when to apply chemical treatments or to predict a disease requires not only that the right information is collected, at the right time, but also that it is analyzed in light of the unique characteristics of the seeds and chemicals. Increased communication and data-sharing between manufacturers increase the likelihood that farmers will use the best products available in the right quantity and at the right time in each field.

Vertical integration solves bargaining and long-term planning problems by unifying the interests (and the management) of these functions. Instead of arm’s length negotiation, a merged Bayer/Monsanto, for example, may better maximize R&D of complicated Ag/chem products through fully integrated departments and merged areas of expertise. A merged company can also coordinate investment decisions (instead of waiting up to 10 years to see what the other company produces), avoid duplication of research, adapt to changing conditions (and the unanticipated course of research), pool intellectual property, and bolster internal scientific capability more efficiently. All told, the merged company projects spending about $16 billion on R&D over the next six years. Such coordinated investment will likely garner far more than either company could from separately spending even the same amount to develop new products. 

Controlling an entire R&D process and pipeline of traits for resistance, chemical treatments, seeds, and digital complements would enable the merged firm to better ensure that each of these products works together to maximize crop yields, at the lowest cost, and at greater speed. Consider the advantages that Apple’s tightly-knit ecosystem of software and hardware provides to computer and device users. Such tight integration isn’t the only way to compete (think Android), but it has frequently proven to be a successful model, facilitating some functions (e.g., handoff between Macs and iPhones) that are difficult if not impossible in less-integrated systems. And, it bears noting, important elements of Apple’s innovation have come through acquisition….

Conclusion

As LaFontaine and Slade have made clear, theoretical concerns about the anticompetitive consequences of vertical integration are belied by the virtual absence of empirical support:

Under most circumstances, profit-maximizing vertical-integration and merger decisions are efficient, not just from the firms’ but also from the consumers’ points of view.

Other antitrust scholars are skeptical of vertical-integration fears because firms normally have strong incentives to deal with providers of complementary products. Bayer and Monsanto, for example, might benefit enormously from integration, but if competing seed producers seek out Bayer’s chemicals to develop competing products, there’s little reason for the merged firm to withhold them: Even if the new seeds out-compete Monsanto’s, Bayer/Monsanto can still profit from providing the crucial input. Its incentive doesn’t necessarily change if the merger goes through, and whatever “power” Bayer has as an input supplier is a function of its scientific know-how, not its merger with Monsanto.

In other words, while some competitors could find a less hospitable business environment, consumers will likely suffer no apparent ill effects, and continue to receive the benefits of enhanced product development and increased productivity.

That’s what we’d expect from innovation-driven integration, and antitrust enforcers should be extremely careful before blocking or circumscribing these mergers lest they end up thwarting, rather than promoting, consumer welfare.

Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA)

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/DuPont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision has apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v. cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments like patent protection or trade secrecy constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed both at the firm level (firm-specific incentives) and at the industry level (general incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may also decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes, or services. The reason is that an invention will cannibalize the merged entity’s profits in larger proportions than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow, who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.
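The two polar effects can be captured in a minimal formalization (an illustrative sketch of my own, not a model drawn from either author). Let $\pi_0$ be an innovator’s pre-invention profit and $\pi_1$ its profit if the invention succeeds:

$$\underbrace{\pi_1}_{\text{entrant's incentive}} \quad \text{versus} \quad \underbrace{\pi_1 - \pi_0}_{\text{incumbent monopolist's incentive}}$$

For an entrant, $\pi_0 = 0$ and the full payoff is at stake; for an incumbent with market power, the $\pi_0$ term is Arrow’s replacement effect, since the invention partly cannibalizes profits already being earned. Schumpeter’s appropriability effect works through $\pi_1$: stronger “protecting devices” or post-invention market power raise $\pi_1$, and with it the incentive to invest, for incumbents and entrants alike. A merger can move both terms at once, which is what makes the net effect an empirical question.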

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion on concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile, or bypass both views of the world. Recent studies by Carl Shapiro and Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that both struck something fundamental when they made their seminal contributions, laying down two systems of belief about the workings of innovation-driven markets.

Now, beyond the theory, the appropriability v. cannibalization framework provides from the outset an appealing lens for the examination of mergers in R&D-driven industries in general. From an operational perspective, the antitrust agency will attempt to understand whether the transaction increases appropriability (which leans in favour of clearance) or cannibalization (which leans in favour of remediation). At the same time, however, the downside of the appropriability v. cannibalization framework (and of any framework more generally) may be to oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the stronger the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is far from obvious. Most of us know – and our antitrust agencies’ misgivings with other sectors confirm it – that IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection against competition from other product applications that do not practice the IP but provide substitute functionality; and (iv) no assurance that the environmental, toxicological, and regulatory authorizations that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the industry-specific strength of IP rights that goes beyond patent data and statistics thus constitutes a necessary step in merger review.
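One loose way to formalize the point (again an illustrative sketch of my own, not drawn from the literature) is to write the expected private return on an R&D outlay $C$ as the perfect-appropriability payoff $V$ discounted by each of these risks:

$$E[\text{return}] = p_{\text{commercial}} \cdot p_{\text{validity}} \cdot p_{\text{regulatory}} \cdot (1-s) \cdot V \;-\; C$$

where $p_{\text{commercial}}$ is the probability the research yields a marketable application, $p_{\text{validity}}$ the probability the IP survives challenge, $p_{\text{regulatory}}$ the probability the accompanying authorizations persist, and $s$ the share of returns lost to non-infringing substitutes. Every factor below one pushes realized appropriability below what patent counts alone would suggest, which is why the industry-specific inquiry just described is necessary.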

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and as a result wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D costs, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor any increase in the merging entity’s power over price, but also any improvement of its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), but also when the transaction combines complementary technological and marketing assets. In Dow/DuPont, no efficiency argument has apparently been made by the parties, so it is difficult to understand if and how such issues have played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, and other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory”, is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute inventions. Instead, the point is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, Arrow’s cannibalization theory is relevant in industries primarily driven by a process of cumulative innovation. But it is much less helpful for understanding the incentives of a monopolist in industries subject to technological change. As a result, the first question that should guide an antitrust agency’s investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technological disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wise caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?