
During the exceptional rise in stock-market valuations from March 2020 to January 2022, both equity investors and antitrust regulators implicitly agreed that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unchecked power over the digital ecosystem.

Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators have embraced the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as interventions deemed necessary to restore competition.

Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. This means that the apparent monopolist will act as if the market is populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.

While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3, 2022, $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.

In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (he allowed the suit to go forward last month after the FTC submitted a revised complaint) has been portrayed as evidence that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants.

Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its Facebook service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, a reassessment reflected in parent company Meta’s downgraded stock price.

Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.

Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, BlackBerry, AOL, Yahoo, Netscape, and PalmPilot all illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.

The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention. 

None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.  

The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.

Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition-policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a determining first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
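To see how little machinery this alternative model needs, consider the following toy simulation (my own illustration, not drawn from the original argument; the platform names, quality values, and noise parameter are all arbitrary assumptions). It suggests that when switching costs are zero and tastes are roughly uniform, a modest quality edge is enough to produce near-total concentration, while the same logic lets a slightly better entrant flip the market just as completely:

```python
# Toy model: users pick whichever platform offers the highest
# perceived quality; there are no prices and no switching costs.
import random

def choose_platform(qualities, noise=0.05):
    """Return the platform a single user joins.

    `noise` adds a small idiosyncratic taste shock, so preferences
    are roughly (but not perfectly) uniform across users.
    """
    perceived = {name: q + random.uniform(-noise, noise)
                 for name, q in qualities.items()}
    return max(perceived, key=perceived.get)

def market_shares(qualities, n_users=100_000):
    """Simulate n_users independent platform choices."""
    counts = dict.fromkeys(qualities, 0)
    for _ in range(n_users):
        counts[choose_platform(qualities)] += 1
    return {name: count / n_users for name, count in counts.items()}

# A modest quality edge produces near-total concentration...
print(market_shares({"Incumbent": 1.00, "Rival": 0.90}))

# ...yet the market remains contestable: a slightly better entrant
# captures those same users just as completely.
print(market_shares({"Incumbent": 1.00, "Rival": 0.90, "Entrant": 1.10}))
```

The point of the sketch is simply that concentration and contestability can coexist: the incumbent’s share collapses the moment a better option appears.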

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, with one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
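For readers who prefer to see the logic stated formally, here is the standard textbook statement of the result (the notation is mine, not Bertrand’s). Two firms with identical constant marginal cost c set prices simultaneously, and consumers buy only from the cheaper firm:

```latex
% Demand facing firm i under price competition:
\[
D_i(p_i, p_j) =
\begin{cases}
D(p_i) & \text{if } p_i < p_j,\\[2pt]
\tfrac{1}{2}\,D(p_i) & \text{if } p_i = p_j,\\[2pt]
0 & \text{if } p_i > p_j.
\end{cases}
\]
% Any common price p > c invites undercutting: charging p - \epsilon
% captures the whole market and (for small \epsilon) nearly doubles
% profit. The only mutual best response is therefore
\[
p_1^{*} = p_2^{*} = c, \qquad \pi_1^{*} = \pi_2^{*} = 0,
\]
% i.e., the perfectly competitive outcome with just two sellers.
```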

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not necessary, condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be swiftly sanctioned. This is notably the case when barriers to entry are extremely low.
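Baumol’s condition can be stated compactly (the notation here is my own simplification, assuming zero sunk costs and an entrant that can undercut, serve the market, and exit before the incumbent reacts). If the incumbent charges any price above average cost, hit-and-run entry is profitable, so the only entry-proof price is the zero-profit one:

```latex
% Let C(q) be total cost and AC(q) = C(q)/q average cost.
% An entrant that undercuts the incumbent's price p_I earns
\[
\pi_E = p_E\, q - C(q) = q\,\bigl(p_E - AC(q)\bigr) > 0
\quad \text{for any } AC(q) < p_E < p_I .
\]
% Such a price p_E exists whenever p_I > AC(q), so the only
% sustainable (entry-proof) price, even for a monopolist, is
\[
p_I^{*} = AC(q),
\]
% the same zero-profit benchmark that head-to-head competition
% would deliver.
```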

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that at least one exchange meets that user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) existing exchanges to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given the above, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of competition that firms face. For instance, scale economies might make monopoly (or another concentrated structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that the cost of learning to use a new app is mostly insignificant. Nowhere is this more apparent than for social-media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed unimaginable in the early 2000s, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet the demand generated by this more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand. Online industries thus seem closer to the Bertrand model of competition, in which the best platform can almost immediately serve any consumers who demand its services.

Conclusion

Of course, none of this should be construed to declare that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival can rapidly gain user traction, that threat alone will discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to meaningfully grapple with these issues has nonetheless shaped the prevailing zeitgeist in tech-policy debates. Cournot’s and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same standards were applied to tech-policy debates.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding warrants scrutiny. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe show unprecedented vitality in the digital sector. Venture-capital funding is cruising at historic highs, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones under discussion, are likely to confront quickly and dramatically the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is Microsoft’s market power over Windows still relevant today, and isn’t it effectively constrained by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricy but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it remains unclear how to weigh this issue, there are signs that a coalition of large news corporations and the publishing oligopoly are behind many antitrust initiatives against digital firms.

Now, as is clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than rushing to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about an absence of competition were true.

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs, which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein once said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high, or the demand too low—not that there are too few competitors.

Geoffrey A. Manne is Executive Director of the International Center for Law & Economics

Dynamic versus static competition

Ever since David Teece and coauthors began writing about antitrust and innovation in high-tech industries in the 1980s, we’ve understood that traditional, price-based antitrust analysis is not intrinsically well-suited for assessing merger policy in these markets.

For high-tech industries, performance, not price, is paramount — which means that innovation is key:

Competition in some markets may take the form of Schumpeterian rivalry in which a succession of temporary monopolists displace one another through innovation. At any one time, there is little or no head-to-head price competition but there is significant ongoing innovation competition.

Innovative industries are often marked by frequent disruptions or “paradigm shifts” rather than horizontal market share contests, and investment in innovation is an important signal of competition. And competition comes from the continual threat of new entry down the road — often from competitors who, though they may start with relatively small market shares, or may arise in different markets entirely, can rapidly and unexpectedly overtake incumbents.

Which, of course, doesn’t mean that current competition and ease of entry are irrelevant. Rather, as Joanna Shepherd noted, because innovation should be assessed across the entire industry and not solely within the merging firms, conduct that might impede new, disruptive, innovative entry is indeed relevant.

But it is also important to remember that innovation comes from within incumbent firms as well, and that the overall level of innovation in an industry may often be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

As Katz and Shelanski note:

To assess fully the impact of a merger on market performance, merger authorities and courts must examine how a proposed transaction changes market participants’ incentives and abilities to undertake investments in innovation.

At the same time, they point out that

Innovation can dramatically affect the relationship between the pre-merger marketplace and what is likely to happen if the proposed merger is consummated…. [This requires consideration of] how innovation will affect the evolution of market structure and competition. Innovation is a force that could make static measures of market structure unreliable or irrelevant, and the effects of innovation may be highly relevant to whether a merger should be challenged and to the kind of remedy antitrust authorities choose to adopt. (Emphasis added).

Dynamic competition in the ag-biotech industry

These dynamics seem to be playing out in the ag-biotech industry. (For a detailed look at how the specific characteristics of innovation in the ag-biotech industry have shaped industry structure, see, e.g., here (pdf)).  

One inconvenient truth for the “concentration reduces innovation” crowd is that, as the industry has experienced more consolidation, it has also become more, not less, productive and innovative. Between 1995 and 2015, for example, the market share of the largest seed producers and crop protection firms increased substantially. And yet, over the same period, annual industry R&D spending went up nearly 750 percent. Meanwhile, the resulting innovations have increased crop yields by 22%, reduced chemical pesticide use by 37%, and increased farmer profits by 68%.

In her discussion of the importance of considering the “innovation ecosystem” in assessing the innovation effects of mergers in R&D-intensive industries, Joanna Shepherd noted that

In many consolidated firms, increases in efficiency and streamlining of operations free up money and resources to source external innovation. To improve their future revenue streams and market share, consolidated firms can be expected to use at least some of the extra resources to acquire external innovation. This increase in demand for externally-sourced innovation increases the prices paid for external assets, which, in turn, incentivizes more early-stage innovation in small firms and biotech companies. Aggregate innovation increases in the process!

The same dynamic seems to play out in the ag-biotech industry, as well:

The seed-biotechnology industry has been reliant on small and medium-sized enterprises (SMEs) as sources of new innovation. New SME startups (often spinoffs from university research) tend to specialize in commercial development of a new research tool, genetic trait, or both. Significant entry by SMEs into the seed-biotechnology sector began in the late 1970s and early 1980s, with a second wave of new entrants in the late 1990s and early 2000s. In recent years, exits have outnumbered entrants, and by 2008 just over 30 SMEs specializing in crop biotechnology were still active. The majority of the exits from the industry were the result of acquisition by larger firms. Of 27 crop biotechnology SMEs that were acquired between 1985 and 2009, 20 were acquired either directly by one of the Big 6 or by a company that itself was eventually acquired by a Big 6 company.

While there is more than one way to interpret these statistics (and they are often used by merger opponents, in fact, to lament increasing concentration), they are actually at least as consistent with an increase in innovation through collaboration (and acquisition) as with a decrease.

For what it’s worth, this is exactly how the startup community views the innovation ecosystem in the ag-biotech industry, as well. As the latest AgFunder AgTech Investing Report states:

The large agribusinesses understand that new innovation is key to their future, but the lack of M&A [by the largest agribusiness firms in 2016] highlighted their uncertainty about how to approach it. They will need to make more acquisitions to ensure entrepreneurs keep innovating and VCs keep investing.

It’s also true, as Diana Moss notes, that

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete…. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

And yet collaboration and licensing have long been prevalent in this industry. Examples are legion, but here are just a few significant ones:

  • Monsanto’s “global licensing agreement for the use of the CRISPR-Cas genome-editing technology in agriculture with the Broad Institute of MIT and Harvard.”
  • Dow and Arcadia Biosciences’ “strategic collaboration to develop and commercialize new breakthrough yield traits and trait stacks in corn.”
  • Monsanto and the University of Nebraska-Lincoln’s “licensing agreement to develop crops tolerant to the broadleaf herbicide dicamba. This agreement is based on discoveries by UNL plant scientists.”

Both large and small firms in the ag-biotech industry continually enter into new agreements like these. See, e.g., here and here for a (surely incomplete) list of deals in 2016 alone.

At the same time, across the industry, new entry has been rampant despite increased M&A activity among the largest firms. Recent years have seen venture financing in AgTech skyrocket — from $400 million in 2010 to almost $5 billion in 2015 — and hundreds of startups now enter the industry annually.

The pending mergers

Today’s pending mergers are consistent with this characterization of a dynamic market in which structure is being driven by incentives to innovate, rather than monopolize. As Michael Sykuta points out,

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

These deals aren’t fundamentally about growing production capacity, expanding geographic reach, or otherwise enhancing market share; rather, each is a fundamental restructuring of the way the companies do business, reflecting today’s shifting agricultural markets, and the advanced technology needed to respond to them.

Technological innovation is unpredictable, often serendipitous, and frequently transformative of the ways firms organize and conduct their businesses. A company formed to grow and sell hybrid seeds in the 1920s, for example, would either have had to evolve or fold by the end of the century. Firms today will need to develop (or purchase) new capabilities and adapt to changing technology, scientific knowledge, consumer demand, and socio-political forces. The pending mergers seemingly fit exactly this mold.

As Allen Gibby notes, these mergers are essentially vertical combinations of disparate, specialized pieces of an integrated whole. Take the proposed Bayer/Monsanto merger, for example. Bayer is primarily a chemicals company, developing advanced chemicals to protect crops and enhance crop growth. Monsanto, on the other hand, primarily develops seeds and “seed traits” — advanced characteristics that ensure the heartiness of the seeds, give them resistance to herbicides and pesticides, and speed their fertilization and growth. In order to translate the individual advances of each into higher yields, it is important that these two functions work successfully together. Doing so enhances crop growth and protection far beyond what, say, spreading manure can accomplish — or either firm could accomplish working on its own.

The key is that integrated knowledge is essential to making this process function. Developing seed traits to work well with (i.e., to withstand) certain pesticides requires deep knowledge of the pesticide’s chemical characteristics, and vice-versa. Processing huge amounts of data to determine when to apply chemical treatments or to predict a disease requires not only that the right information is collected, at the right time, but also that it is analyzed in light of the unique characteristics of the seeds and chemicals. Increased communications and data-sharing between manufacturers increases the likelihood that farmers will use the best products available in the right quantity and at the right time in each field.

Vertical integration solves bargaining and long-term planning problems by unifying the interests (and the management) of these functions. Instead of arm’s length negotiation, a merged Bayer/Monsanto, for example, may better maximize R&D of complicated Ag/chem products through fully integrated departments and merged areas of expertise. A merged company can also coordinate investment decisions (instead of waiting up to 10 years to see what the other company produces), avoid duplication of research, adapt to changing conditions (and the unanticipated course of research), pool intellectual property, and bolster internal scientific capability more efficiently. All told, the merged company projects spending about $16 billion on R&D over the next six years. Such coordinated investment will likely garner far more than either company could from separately spending even the same amount to develop new products. 

Controlling an entire R&D process and pipeline of traits for resistance, chemical treatments, seeds, and digital complements would enable the merged firm to better ensure that each of these products works together to maximize crop yields, at the lowest cost, and at greater speed. Consider the advantages that Apple’s tightly-knit ecosystem of software and hardware provides to computer and device users. Such tight integration isn’t the only way to compete (think Android), but it has frequently proven to be a successful model, facilitating some functions (e.g., handoff between Macs and iPhones) that are difficult if not impossible in less-integrated systems. And, it bears noting, important elements of Apple’s innovation have come through acquisition….

Conclusion

As LaFontaine and Slade have made clear, theoretical concerns about the anticompetitive consequences of vertical integration are belied by the virtual absence of empirical support:

Under most circumstances, profit-maximizing vertical-integration and merger decisions are efficient, not just from the firms’ but also from the consumers’ points of view.

Other antitrust scholars are skeptical of vertical-integration fears because firms normally have strong incentives to deal with providers of complementary products. Bayer and Monsanto, for example, might benefit enormously from integration, but if competing seed producers seek out Bayer’s chemicals to develop competing products, there’s little reason for the merged firm to withhold them: Even if the new seeds out-compete Monsanto’s, Bayer/Monsanto can still profit from providing the crucial input. Its incentive doesn’t necessarily change if the merger goes through, and whatever “power” Bayer has as an input is a function of its scientific know-how, not its merger with Monsanto.

In other words, while some competitors could find a less hospitable business environment, consumers will likely suffer no apparent ill effects, and continue to receive the benefits of enhanced product development and increased productivity.

That’s what we’d expect from innovation-driven integration, and antitrust enforcers should be extremely careful before thwarting or circumscribing these mergers lest they end up thwarting, rather than promoting, consumer welfare.

Diana L. Moss is President of the American Antitrust Institute

Innovation Competition in the Spotlight

Innovation is more and more in the spotlight as questions grow about concentration and declining competition in the U.S. economy. These questions come not only from advocates for more vigorous competition enforcement but also, increasingly, from those who adhere to the school of thought that consolidation tends to generate procompetitive efficiencies. On March 27th, the European Commission issued its decision approving the Dow-DuPont merger, subject to divestitures of DuPont’s global R&D agrichemical assets to preserve price and innovation competition.

Before we read too much into what the EU decision in Dow-DuPont means for merger review in the U.S., remember that agriculture differs markedly across regions. Europe uses very little genetically modified (or transgenic) seed, whereas row crop acreage in the U.S. is planted mostly with it. This cautions against drawing major implications of the EU’s decision across jurisdictions.

This post unpacks the mergers of Dow-DuPont and Monsanto-Bayer in the U.S. and what they mean for innovation competition.

A Troubled Landscape? Past Consolidation in Agricultural Biotechnology

If approved as proposed, the mergers of Dow-DuPont and Monsanto-Bayer would reduce the field of Big 6 agricultural biotechnology (ag-biotech) firms to the Big 4. This has raised concerns about potentially higher prices for traits, seeds, and agrichemicals, less choice, and less innovation. The two mergers would mark a third wave of consolidation in the industry since the mid-1980s, when transgenic technology first emerged. Past consolidation has materially affected the structure of the markets. This is particularly true in crop seed, where the level of concentration (and its increase over time) is the highest of any agricultural input sector.

Growers and consumers feel the effects of these changes. Consumers pay attention to their choices at the grocery store, which have arguably diminished and for which they pay prices that have risen at rates in excess of inflation. And the states in which agriculture is a major economic activity worry about their growers and the prices they pay for transgenic seed, agrichemicals, and fertilizers. Farmers we spoke to note, for example, that weeds that are resistant to the herbicide Roundup have evolved over time, making it no longer as effective as it once was. Dependence on seed and chemical cropping systems with declining effectiveness (due to resistance) has been met by the industry with newer and more expensive traited seed and different agrichemicals. With consolidation, these alternatives have dwindled.

These are not frivolous concerns. Empirical evidence shows that “technology fees” on transgenic corn, soybean, and cotton seed make up a significant proportion of total seed costs. The USDA notes that the prices of farm inputs, led by crop seed, generally have risen faster over the last 20 years than the prices farmers have received for their commodities. Moreover, seed price increases have outpaced yield increases over time. And finally, the USDA has determined that increasing levels of concentration in agricultural input markets (including crop seed) are no longer generally associated with higher R&D or a permanent rise in R&D intensity.

Putting the Squeeze on Growers and Consumers

The “squeeze” on growers and consumers highlights the fact that ag-biotech innovation comes at an ever-higher price – a price that many worry will rise further if the Dow-DuPont and Monsanto-Bayer mergers go through. These concerns are magnified by the structure of the food supply chain, with many growers and consumers at either end but little competition in the middle. In the middle are the ag-biotech firms that innovate traits, seeds, and agrichemicals; food processors such as grain millers and meatpackers; food manufacturers; distributors; and retail grocers.

Almost every sector has been affected by significant consolidation over the last two decades, some of which has been blocked, but a lot of which has not. For example, U.S. antitrust enforcers stopped the mergers of beef packers JBS and National Beef and broadline food distributors Sysco and USFoods. But key mergers that many believed raised significant competitive concerns went through, including Tyson-Hillshire Brands (pork), ConAgra-Horizon Mills (flour), Monsanto-Delta & Pine Land (cotton), and Safeway-Albertsons (grocery).

Aside from concerns over price, quality, and innovation, consolidation in “hourglass”-shaped supply chains raises other issues. For example, it is often motivated by incentives to bulk up to bargain more effectively vis-à-vis more powerful input suppliers or customers. As we have seen with health care providers and health insurers, mergers for this purpose can trigger further consolidation, creating a domino effect. A bottlenecked supply chain also decreases resiliency. With less competition, it is more exposed to exogenous shocks such as bioterrorism or food-borne disease. That’s a potential food security problem.

Innovation Competition and the Agricultural Biotechnology Mergers

The Dow-DuPont and Monsanto-Bayer merger proposals raise a number of issues. One is significant overlap in seed, likely to result in a duopoly in corn and soybeans and a dominant firm (Monsanto) in cotton. A second concern is that the mergers would create or enhance substantial vertical integration. While some arguments for integration can carry weight in a Guidelines analysis, here there is economic evidence from soybeans and cotton indicating that prices tend to be higher under vertical integration than under cross-licensing arrangements.

Moreover, the “platforms” resulting from the mergers are likely to be engineered to create exclusive packages of traits, seeds, and agrichemicals that are less likely to interoperate with rival products. This could raise entry barriers for smaller innovators and reduce or cut off access to resources needed to compete effectively. Indeed, one farmer noted the constraint of being locked into a single traits-seeds-chemicals platform in a market with already limited competition: “[I] can’t mix chemicals with other companies’ products to remedy Roundup resistance.”

A third concern raised by the mergers is the potential elimination of competition in innovation markets. The DOJ/FTC Horizontal Merger Guidelines (§6.4) note that a merger may diminish innovation competition by curtailing “innovative efforts below the level that would prevail in the absence of the merger.” This is especially the case when the merging firms are each other’s close competitors (e.g., as in the DOJ’s opposition to the proposed Applied Materials-Tokyo Electron merger). Dow, DuPont, Monsanto, and Bayer are four of only six ag-biotech rivals.

Preserving Parallel Path R&D Pipelines

In contrast to arguments that the mergers would combine only complementary assets, the R&D pipelines for all four firms show overlaps in major areas of traits, seeds, and crop protection. This supports the notion that the R&D pipelines compete head-to-head for technology intended for commercialization in U.S. markets. Maintaining competition in R&D ensures incentives remain strong to continue existing and prospective product development programs. This is particularly true in industries like ag-biotech (and pharma) where R&D is risky, regulatory approvals take time, and commercial success depends on crop planning and switching costs.

Maintaining Pro-Competitive Incentives to Cross-License Traits

Perhaps more important is that innovation in ag-biotech depends on maintaining a field of rivals, each with strong pro-competitive incentives to collaborate to form new combined (i.e., “stacked”) trait profiles. Farmers benefit most when there are competing stacks to choose from. About 60% of all stacks on the market in 2009 were the result of joint-venture cross-licensing collaborations across firms, and traits innovated by Dow, DuPont, Monsanto, and Bayer account for over 80% of the traits in those stacks. That these companies are important innovators is apparent in GM Crop Database records of genetic corn, soybean, and cotton “events” approved in the U.S.: from 1991 to 2014, for example, the four companies account for a significant proportion of innovation in important genetic events.

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete. Such agreements could range from deciding which firms specialize in certain crops or traits, to devising market “rules,” such as cross-licensing terms and conditions. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

Remedies or Not? Preserving Innovation Competition

The DOJ has permitted two major ag-biotech mergers in recent decades: Monsanto’s acquisitions of DeKalb (corn) and Delta & Pine Land (cotton). In crafting remedies in both cases, the DOJ recognized the importance of innovation markets by fashioning remedies focused on licensing or divesting patented technologies. The proposed mergers of Dow-DuPont and Monsanto-Bayer appear to be a different animal. They would reduce an already small field of large, integrated competitors, raise competitive concerns of greater breadth and complexity than previous mergers, and come amid growing evidence that transgenic technology has come at a higher and higher price.

Add to this the fact that a viable buyer of any divestiture R&D asset would be difficult to find outside the Big 6. Such a buyer would need to be national, if not global, in scale and scope in order to compete effectively post-merger. Lack of scale and scope in R&D, financing, marketing, and distribution would necessitate cobbling together a package of assets to create and potentially prop up a national competitor. While the EU managed to pull this off, it is unclear whether the fact pattern in the U.S. would support a similar outcome. What we do know is that past mergers in the food and agriculture space have squeezed growers and consumers. Unless adequately addressed, these mega-deals stand to squeeze them even more.

Levi A. Russell is Assistant Professor, Agricultural & Applied Economics, University of Georgia and a blogger at Farmer Hayek.

Though concentration seems to be an increasingly popular metric for discussing antitrust policy (a backward move in my opinion, given the theoretical work by Harold Demsetz and others many years ago in this area), contestability is still the standard for evaluating antitrust issues from an economic standpoint. Contestability theory, most closely associated with William Baumol, rests on three primary principles. A market is perfectly contestable if 1) new entrants are not at a cost disadvantage to incumbents, 2) there are no barriers to entry or exit, and 3) there are no sunk costs. In this post, I discuss these conditions in relation to recent mergers and acquisitions in the agricultural chemical and biotech industry.
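
For concreteness, those three conditions and their textbook implication can be stated compactly. The notation below is introduced here purely for illustration (it is not from this post): if entrants face the same cost function as incumbents, entry and exit are free, and sunk costs are zero, the threat of “hit-and-run” entry drives price toward average cost even in a single-firm market.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Perfect contestability (illustrative notation):
% (1) entrants share the incumbent's cost function,
% (2) entry and exit are free, and
% (3) sunk costs S are zero.
% Implication: hit-and-run entry drives price to average cost,
% even with a single incumbent.
\[
  c_E(q) = c_I(q), \quad \text{free entry and exit}, \quad S = 0
  \;\Longrightarrow\; p \to AC(q)
\]
\end{document}
```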

Contestability is rightly understood as a spectrum. While no industry is perfectly contestable, we expect markets in which barriers to entry and exit are low, sunk costs are low, and new entrants can produce at costs similar to incumbents’ to be more innovative and to price closer to marginal cost. By these criteria, the agricultural chemical and biotech space does not appear very contestable: there are significant R&D costs associated with creating new chemistries and new seed traits, and the production and distribution of these products are likely characterized by significant economies of scale. None of the three conditions is met, and the industry seems characterized by very low contestability. We would expect, then, that these mergers and acquisitions would drive up the prices of the companies’ products, leading to higher monopoly profits. Indeed, one study conducted at Texas A&M University finds that, as a result of the Bayer-Monsanto acquisition and DuPont/Pioneer’s merger with Dow, corn, soybean, and cotton seed prices will rise by an estimated 2.3%, 1.9%, and 18.2%, respectively.
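
As a back-of-the-envelope illustration of what those percentages imply at the farm gate, the sketch below applies the study’s estimated increases to baseline seed prices. The baseline figures are hypothetical placeholders chosen for illustration, not data from the study.

```python
# Back-of-the-envelope projection using the Texas A&M study's estimated
# post-merger seed price increases. Baseline prices are HYPOTHETICAL
# placeholders, not figures from the study.
EST_INCREASE = {"corn": 0.023, "soybean": 0.019, "cotton": 0.182}
BASELINE_PRICE = {"corn": 300.00, "soybean": 55.00, "cotton": 450.00}  # $/unit, hypothetical

for crop, pct in EST_INCREASE.items():
    before = BASELINE_PRICE[crop]
    after = before * (1 + pct)  # apply the estimated percentage increase
    print(f"{crop:8s} ${before:8.2f} -> ${after:8.2f}  (+{pct:.1%})")
```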

These estimates are certainly concerning, especially given the current state of the agricultural economy. As the authors of the Texas A&M study point out, they provide a justification for antitrust authorities to examine the mergers and acquisitions further. However, the way we apply the contestability concept to the real world should also be scrutinized. To do so, we can examine other industries in which, according to the standard model of contestability, we would expect to find high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants.

This chart, assembled by the American Enterprise Institute using data from the Bureau of Labor Statistics, shows changes in the prices of several consumer goods and services from 1996 to 2016, compared with CPI inflation. Industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (such as automobiles, wireless service, and TVs) have seen their prices plummet relative to inflation over that 20-year period, alongside significant product innovation.

Disallowing mergers or acquisitions that would create synergies leading to further innovation or lower costs does not improve economic efficiency. The transgenic seeds created by some of these companies have allowed farmers to use less-toxic pesticides, providing both private and public benefits; thus, the higher prices projected by the A&M study might be justified on efficiency grounds. The R&D performed by these firms has led to new pesticide chemistries that have allowed farmers to deal with changes in the behavior of insect populations and will likely allow them to handle issues of pesticide resistance in plants and insects in the future.

What does the empirical evidence on price trends and on the value of these agricultural firms’ innovations imply about contestability and its relation to antitrust enforcement? Contestability should be understood not as a static concept but as a dynamic one. Competition, more broadly, is the constant striving to outdo rivals and capture economic profit, not a set of conditions used to analyze a market via a snapshot in time. A proper understanding of competition as a dynamic concept leads to the following conclusion: for a market to be contestable, such that incumbents are incentivized to behave competitively, the cost advantages and entry or exit barriers enjoyed by incumbents must be no greater than an entrepreneur’s expected economic profit from entry. Thus, a commitment to property rights by antitrust courts, together with avoidance of excessive licensure, intellectual-property, and economic regulation by the legislative and executive branches, is sufficient from an economic perspective to ensure a reasonable degree of contestability in markets.
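
A minimal way to formalize that conclusion, again in notation of my own choosing rather than anything from the post: entry disciplines incumbents whenever the entrepreneur’s expected economic profit from entry is at least as large as the incumbents’ cost advantage plus the barriers to entry and exit.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Dynamic contestability condition (illustrative notation):
%   E[pi_entry] = entrepreneur's expected economic profit from entry,
%   Delta c     = incumbents' cost advantage over entrants,
%   B           = barriers to entry and exit (including sunk costs).
% Incumbents are disciplined to behave competitively whenever:
\[
  \mathbb{E}[\pi_{\text{entry}}] \;\ge\; \Delta c + B
\]
\end{document}
```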

In my next post I will discuss a source of disruptive technology that will likely provide some competitive pressure on the firms in these mergers and acquisitions in the near future.