The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited, the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”
This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.
Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:
…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.
The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.
Furthermore, in a recent editorial lambasting the FTC’s challenge to a Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentive for venture-capital investments in promising startups, a result at odds with free markets and innovation:
Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.
This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.
It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias by current FTC leadership would undoubtedly prompt them to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).
In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.
The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.
Depending on whom you ask, complexity theory is everything from a revolutionary paradigm to a lazy buzzword. What would it mean to apply it in the context of antitrust and would it, in fact, be useful?
Given its numerous applications, scholars have proposed several definitions of complexity theory, invoking different kinds of complexity. According to one, complexity theory is concerned with the study of complex adaptive systems (CAS)—that is, networks that consist of many diverse, interdependent parts. A CAS may adapt and change, for example, in response to past experience.
That does not sound too strange as a general description either of the economy as a whole or of markets in particular, with consumers, firms, and potential entrants among the numerous moving parts. At the same time, this approach contrasts with orthodox economic theory—specifically, with the game-theory models that rule antitrust debates and that prize simplicity and reductionism.
As both a competition economist and a history buff, my primary point of reference for complexity theory is a scholarly debate among Bronze Age scholars. Sound obscure? Bear with me.
The collapse of several flourishing Mediterranean civilizations in the 12th century B.C. (Mycenae and Egypt, to name only two) puzzles historians as much as the question of whether any particular merger will raise prices stumps today’s economists. Both questions encounter difficulties in gathering sufficient data for empirical analysis (the lack of counterfactuals and foresight in one case, and 3,000 years of decay in the other), forcing a recourse to theory and possibility results.
Earlier Bronze Age scholarship blamed the “Sea Peoples,” invaders of unknown origin (possibly Sicily or Sardinia), for the destruction of several thriving cities and states. The primary source for this thesis was statements attributed to the Egyptian pharaoh of the time. More recent research, while acknowledging the role of the Sea Peoples, has gone to lengths to point out that, in many cases, we simply don’t know. Alternative explanations (famine, disease, systems collapse) are individually unconvincing, but might each have contributed to the end of various Bronze Age civilizations.
Complexity theory was brought into this discussion with some caution. While acknowledging the theory’s potential usefulness, Eric Cline writes:
We may just be applying a scientific (or possibly pseudoscientific) term to a situation in which there is insufficient knowledge to draw firm conclusions. It sounds nice, but does it really advance our understanding? Is it more than just a fancy way to state a fairly obvious fact?
In a review of Cline’s book, archaeologist Guy D. Middleton agreed that the application of complexity theory might be “useful” but also “obvious.” Similarly, in the context of antitrust, I think complexity theory may serve as a useful framework to understand uncertainty in the marketplace.
Thinking of a market as a CAS can help to illustrate the uncertainty behind every decision. For example, a formal economic model with a clear (at least, to economists) equilibrium outcome might predict that a certain merger will give firms the incentive and ability to reduce spending on research and development. But the lens of complexity theory allows us to better understand why we might still be wrong, or why we are right, but for the wrong reasons.
We can accept that decisions that are relevant and observable to antitrust practitioners (such as price and production decisions) can be driven by things that are small and unobservable. For example, a manager who ultimately calls the shots on R&D budgets for an airplane manufacturer might go to a trade fair and become fascinated by a cool robot that a particular shipyard presented. This might have been the key push that prompted her to finance an unlikely robotics project proposed by her head engineer.
Her firm is, indeed, part of a complex system—one that includes the individual purchase decisions of consumers, customer feedback, reports from salespeople in the field, news from science and business journalists about the next big thing, and impressions at trade fairs and exhibitions. These all coalesce in the manager’s head and influence simple decisions about her R&D budget. But I have yet to see a merger-review decision that predicted effects on innovation from peeking into managers’ minds in such a way.
This little story might be a far-fetched example of the Butterfly Effect, perhaps the most familiar concept from complexity theory. Just as the flaps of a butterfly’s wings might cause a storm on the other side of the world, the shipyard’s earlier decision to invest in a robotic manufacturing technology resulted in our fictitious aircraft manufacturer’s decision to invest more in R&D than we might have predicted with our traditional tools.
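The Butterfly Effect has a standard textbook illustration: sensitive dependence on initial conditions in the logistic map. The Python sketch below is purely illustrative (the map, parameter, and starting values are my own choices, not anything from the merger literature); it shows how a perturbation of one part in ten million eventually produces completely different trajectories.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime
# (r = 4). A toy model from complexity theory, not a market model.

def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # a "butterfly flap" of one part in 1e7

# Early on, the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))   # on the order of 1e-7
# ...but the gap roughly doubles each step, and after a few dozen
# iterations the trajectories have decoupled entirely.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The same deterministic rule, started from two all-but-identical states, ends up somewhere entirely different: the small, unobservable cause (the trade-fair robot) is doing the work.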
Indeed, it is easy to think of other small events that can have consequences leading to price changes that are relevant in the antitrust arena. Remember the cargo ship Ever Given, which blocked the Suez Canal in March 2021? One reason mentioned for its distress were unusually strong winds (whether a butterfly was to blame, I don’t know) pushing the highly stacked containers like a sail. The disruption to supply chains was felt in various markets across Europe.
In my opinion, one benefit of admitting this complexity is that it can make ex post evaluation more common in antitrust. Indeed, some researchers are doing great work on this. Enforcers are understandably hesitant to admit that they might get it wrong sometimes, but I believe that we can acknowledge that we will not ultimately know whether merged firms will, say, invest more or less in innovation. Complexity theory tells us that, even if our best and most appropriate model is wrong, the world is not random. It is just very hard to understand and hinges on things that are neither straightforward to observe, nor easy to correctly gauge ex ante.
Turning back to the Bronze Age, scholars have an easier time observing that a certain city was destroyed and abandoned at some point in time than they do in correctly naming the culprit (the Sea Peoples, a rival power, an earthquake?). The appeal of complexity theory is not just that it lifts a scholar’s burden to name one or a few predominant explanations, but that it grants confidence that the outcome itself arose out of a complex system: the big and small effects that factors such as famine, trade, weather, and fortune may have had on the city’s ability to defend itself against attack, and the individual-but-interrelated decisions of a city’s citizens to stay or leave following a catastrophe.
Similarly, for antitrust experts, it is easier to observe a price increase following a merger than to correctly guess its reason. Where economists differ from archaeologists and classicists is that they don’t just study the past. They have to continue exploring the present and future. Imagine that an agency clears a merger that we would have expected not to harm competition, but it turns out, ex post, that it was a bad call. Complexity theory doesn’t just offer excuses for where reality diverged from our prediction. Instead, it can tell us whether our tools were deficient or whether we made an “honest mistake.” As investigations are always costly, it is up to the enforcer (or those setting their budget) to decide whether it makes sense to expand investigations to account for new, complex phenomena (reading the minds of R&D managers will probably remain out of the budget for the foreseeable future).
Finally, economists working on antitrust problems should not see this as belittling their role, but as a welcome frame for their work. Computing diversion ratios or modeling a complex market as a straightforward set of equations might still be the best we can do. A model that is right on average gets us closer to the right answer and is certainly preferred to having no clue what’s going on. Where we don’t have precedent to guide us, we have to resort to models that may be wrong, despite getting everything right that was under our control.
A few things that Petit and Schrepel call for are comfortably established in the economist’s toolkit. They might not, however, always be put to use where they should. Notably, there are feedback loops in dynamic models. Even in static models, it is possible to show how a change in one variable has direct and indirect (second order) effects on an outcome. The typical merger investigation is concerned with short-term effects, perhaps those materializing over the three to five years following a merger. These short-term effects may be relatively easy to approximate in a simple model. Granted, Petit and Schrepel’s article adopts a wide understanding of antitrust—including pro-competitive market regulation—but this seems like an important caveat, nonetheless.
In conclusion, complexity theory is something economists and lawyers who study markets should learn more about. It’s a fascinating research paradigm and a framework in which one can make sense of small and large causes having sometimes unpredictable effects. For antitrust practitioners, it can advance our understanding of why our predictions can fail when the tools and approaches that we use are limited. My hope is that understanding complexity will increase openness to ex-post evaluation and inform expectations toward antitrust enforcement (and its limits). At the same time, it is still an (economic) question of costs and benefits as to whether further complications in an antitrust investigation are worth it.
A fascinating introduction that balances approachability and source work is YouTube’s Extra History series on the Bronze Age collapse.
A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)
The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm who sold doesn’t start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.
And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.
Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.
This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.
The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:
What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?
This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.
Do Monopoly Profits Always Exceed Joint Duopoly Profits?
Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since profits are highest when there is only a single monopolist, that seller will always have an incentive to buy out any competitors.
The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:
I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.
Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:
Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.
We see a presumption against mergers in the recent FTC challenge of Meta’s purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:
The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.
Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?
I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.
First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.
For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.
Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant half of the duopoly profit to prevent entry. It is not, however, necessarily profitable for the incumbent to pay both potential entrants half of the duopoly profit to avoid entry by either.
Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.
For a simple example, consider a Cournot oligopoly model with an industry-inverse demand curve of P(Q)=1-Q and constant marginal costs that are normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profits. A monopolist makes a profit of 1/4. A duopolist can expect to earn a profit of 1/9. If there are three potential entrants in addition to the incumbent, the monopolist must pay each of them the duopoly profit of 1/9, for a total of 3 × 1/9 = 1/3, which exceeds the monopoly profit of 1/4.
In the Nash/Cournot equilibrium, the incumbent will not acquire any of the competitors, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.
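These payoffs can be checked directly. The short Python sketch below reproduces the Cournot numbers from the text (per-firm profit 1/((N+1)^2) under inverse demand P(Q)=1-Q and zero marginal cost) and applies the text’s logic that each potential entrant must be paid its duopoly profit; the buyout-profitability comparison is my own framing of that argument.

```python
# Cournot example from the text: inverse demand P(Q) = 1 - Q, zero
# marginal cost. With N symmetric firms, each earns 1/(N+1)^2.
from fractions import Fraction

def cournot_profit(n_firms):
    """Per-firm equilibrium profit with n symmetric Cournot competitors."""
    return Fraction(1, (n_firms + 1) ** 2)

monopoly = cournot_profit(1)  # 1/4
duopoly = cournot_profit(2)   # 1/9

def buyout_is_profitable(n_entrants):
    """Is keeping monopoly worth paying EACH entrant its duopoly profit?"""
    total_payments = n_entrants * duopoly
    return monopoly > total_payments

print(buyout_is_profitable(1))  # True:  1/9 < 1/4
print(buyout_is_profitable(2))  # True:  2/9 < 1/4
print(buyout_is_profitable(3))  # False: 3/9 = 1/3 > 1/4
```

With one or two potential entrants, buying them all out is cheaper than the monopoly profit it preserves; with three, it no longer is, which is the text’s point about enough potential entrants overturning the presumption.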
If we observe an acquisition in a market with many potential entrants (which any given market may or may not have), the merger cannot be solely about obtaining monopoly profits, since the model above shows that the incumbent has no incentive to buy them all out.
If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for that deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead to empirical analysis of the merger and market in question, as to whether it would be profitable to acquire all potential entrants.
The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:
Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.
If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).
Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.
Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.
Many Reasons for Mergers
But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.
If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model does not give us a way to disentangle when mergers would stop without antitrust enforcement.
Mergers do not affect the production side of the economy, under this assumption, but exist solely to gain the market power to manipulate prices. Since the model finds no downsides for the incumbent to acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.
Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.
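A quick numeric check of this point, using the same linear-demand setting as the earlier Cournot example. The cost levels below are hypothetical, chosen only to illustrate that a lower-cost entrant’s winner-take-all prize can exceed the incumbent’s current monopoly profit.

```python
# Winner-take-all sketch: with inverse demand P(Q) = 1 - Q and constant
# marginal cost c, a monopolist produces (1 - c)/2 and earns (1 - c)^2 / 4.
# The cost levels are illustrative, not drawn from any real case.

def monopoly_profit(c):
    """Profit of a monopolist with marginal cost c facing P(Q) = 1 - Q."""
    return (1 - c) ** 2 / 4

incumbent = monopoly_profit(0.5)  # higher-cost incumbent: 0.0625
entrant = monopoly_profit(0.2)    # lower-cost entrant:    0.16

# The prize from winning the whole market with lower costs exceeds the
# incumbent's current monopoly profit, so "monopoly maintenance" alone
# cannot rationalize the incumbent outbidding the entrant's prospects.
print(entrant > incumbent)  # True
```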
An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.
In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.
Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.
From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are somewhat arbitrary; as a matter of pure theory, the choice makes no difference. Monopsony and monopoly are just mirror images.
Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter of antitrust enforcement, it is less clear. The moment we go slightly less abstract and use the basic models that economists use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.
Monopsony Requires Studying Output
Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate efficiency gains (and they want to allow it) or market power (and they want to stop it) but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)
In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Since the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let the merger go through, since consumers are better off.
In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Since the merger raises prices for consumers, the agencies (assume they care about the consumer welfare standard) will not let the merger go through, since consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
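The two predictions can be illustrated in a linear Cournot setting: a duopoly merging to monopoly either with a cost saving (output rises) or without one (output falls). The demand curve and cost numbers below are illustrative only, not drawn from any case.

```python
# Duopoly-to-monopoly merger under inverse demand P(Q) = 1 - Q.
# Pre-merger: two symmetric Cournot firms with marginal cost c;
# total output is 2(1 - c)/3. Post-merger: one firm with (possibly
# lower) marginal cost c_m; output is (1 - c_m)/2. Illustrative numbers.

def duopoly_output(c):
    """Total Cournot duopoly output with common marginal cost c."""
    return 2 * (1 - c) / 3

def monopoly_output(c):
    """Monopoly output with marginal cost c."""
    return (1 - c) / 2

pre = duopoly_output(0.5)  # pre-merger quantity, about 0.333

# Efficiency-enhancing merger: cost falls from 0.5 to 0.2 -> output RISES.
print(monopoly_output(0.2) > pre)  # True (0.40 > 0.333)

# Pure market-power merger: no cost saving -> output FALLS.
print(monopoly_output(0.5) < pre)  # True (0.25 < 0.333)
```

The sign of the output change alone separates the two cases, which is exactly why the agencies can read the monopoly case off the output market.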
Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.
To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.
Not so fast. Fewer input purchases may be because of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospitals are not exercising any market power to suppress wages.
The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.
How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output.
In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.
In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.
This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.
What Assumptions Make the Difference Between Monopoly and Monopsony?
Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopsony power over apples or monopoly power over bananas?
There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.
The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.
Welcome to the FTC UMC Roundup, our new weekly update of news and events relating to antitrust and, more specifically, to the Federal Trade Commission’s (FTC) newfound interest in “revitalizing” the field. Each week we will bring you a brief recap of the week that was and a preview of the week to come. All with a bit of commentary and news of interest to regular readers of Truth on the Market mixed in.
This week’s headline? Of course it’s that Alvaro Bedoya has been confirmed as the FTC’s fifth commissioner—notably breaking the commission’s 2-2 tie between Democrats and Republicans and giving FTC Chair Lina Khan the majority she has been lacking. Politico and Gibson Dunn both offer some thoughts on what to expect next—though none of the predictions are surprising: more aggressive merger review and litigation; UMC rulemakings on a range of topics, including labor, right-to-repair, and pharmaceuticals; and privacy-related consumer protection. The real question is how quickly and aggressively the FTC will implement this agenda. Will we see a flurry of rulemakings in the next week, or will they be rolled out over a period of months or years? Will the FTC risk major litigation questions with a “go big or go home” attitude, or will it take a more incrementalist approach to boiling the frog?
Questions about the climate at the FTC continue following release of the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey. Sen. Roger Wicker (R-Miss.) wants to know what has caused staff satisfaction at the agency to fall precipitously. And former senior FTC staffer Eileen Harrington issued a stern rebuke of the agency at this week’s open meeting, saying of the relationship between leadership and staff: “The FTC is not a failed agency but it’s on the road to becoming one. This is a crisis.”
A little further afield, the 5th U.S. Circuit Court of Appeals issued an opinion this week in a case involving SEC administrative-law judges that took broad issue with them on delegation, due process, and “take care” grounds. It may come as a surprise that this has led to much overwrought consternation that the opinion would dismantle the administrative state. But given that the SEC and FTC often face similar constitutional issues (recall that Kokesh v. SEC was the precursor to AMG Capital), the 5th Circuit case could portend future problems for FTC adjudication. Add this to the queue with the Supreme Court’s pending review of whether federal district courts can consider constitutional challenges to an agency’s structure. The court was already scheduled to consider this question with respect to the FTC this next term in Axon, and agreed this week to hear a similar SEC-focused case next term as well.
Some Navel-Gazing News!
Congratulations to recent University of Michigan Law School graduate Kacyn Fujii, winner of our New Voices competition for contributions to our recent symposium on FTC UMC Rulemaking (hey, this post is actually part of that symposium, as well!). Kacyn’s contribution looked at the statutory basis for FTC UMC rulemaking authority and evaluated the use of such authority as a way to address problematic use of non-compete clauses.
And, one for the academics (and others who enjoy writing academic articles): you might be interested in this call for proposals for a research roundtable on Market Structuring Regulation that the International Center for Law & Economics will host in September. If you are interested in writing on topics that include conglomerate business models, market-structuring regulation, vertical integration, or other topics relating to the regulation and economics of contemporary markets, we hope to hear from you!
Federal Trade Commission (FTC) Chair Lina Khan missed the mark once again in her May 6 speech on merger policy, delivered at the annual meeting of the International Competition Network (ICN). At a time when the FTC and U.S. Justice Department (DOJ) are presumably evaluating responses to the agencies’ “request for information” on possible merger-guideline revisions (see here, for example), Khan’s recent remarks suggest a predetermination that merger policy must be “toughened” significantly to disincentivize a larger portion of mergers than under present guidance. A brief discussion of Khan’s substantively flawed remarks follows.
Khan’s remarks begin with a favorable reference to the tendentious statement from President Joe Biden’s executive order on competition that “broad government inaction has allowed far too many markets to become uncompetitive, with consolidation and concentration now widespread across our economy, resulting in higher prices, lower wages, declining entrepreneurship, growing inequality, and a less vibrant democracy.” The claim that “government inaction” has enabled increased market concentration and reduced competition has been shown to be inaccurate, and therefore cannot serve as a defensible justification for a substantive change in antitrust policy. Accordingly, Khan’s statement that the executive order “underscores a deep mandate for change and a commitment to creating the enabling environment for reform” rests on foundations of sand.
Khan then shifts her narrative to a consideration of merger policy, stating:
Merger investigations invite us to make a set of predictive assessments, and for decades we have relied on models that generally assumed markets are self-correcting and that erroneous enforcement is more costly than erroneous non-enforcement. Both the experience of the U.S. antitrust agencies and a growing set of empirical research is showing that these assumptions appear to have been at odds with market realities.
Khan argues, without explanation, that “the guidelines must better account for certain features of digital markets—including zero-price dynamics, the competitive significance of data, and the network externalities that can swiftly lead markets to tip.” She fails to make any showing that consumer welfare has been harmed by mergers involving digital markets, or that the “zero-price” feature is somehow troublesome. Moreover, the reference to “data” as being particularly significant to antitrust analysis appears to ignore research (see here) indicating there is an insufficient basis for having an antitrust presumption involving big data, and that big data (like R&D) may be associated with innovation, which enhances competitive vibrancy.
Khan also fails to note that network externalities are beneficial; when users are added to a digital platform, the platform’s value to other users increases (see here, for example). What’s more (see here), “gateways and multihoming can dissipate any monopoly power enjoyed by large networks[,] … provid[ing] another reason” why network effects may not raise competitive problems. In addition, the implicit notion that “tipping” is a particular problem is belied by the ability of new competitors to “knock off” supposed entrenched digital monopolists (think, for example, of Yahoo being displaced by Google, and Myspace being displaced by Facebook). Finally, a bit of regulatory humility is in order. Given the huge amount of consumer surplus generated by digital platforms (see here, for example), enforcers should be particularly cautious about avoiding more aggressive merger (and antitrust in general) policies that could detract from, rather than enhance, welfare.
Khan argues that guidelines drafters should “incorporate new learning” embodied in “empirical research [that] has shown that labor markets are highly concentrated” and a “U.S. Treasury [report] recently estimating that a lack of competition may be costing workers up to 20% of their wages.” Unfortunately for Khan’s argument, these claims have been convincingly debunked (see here) in a new study by former FTC economist Julie Carlson (see here). As Carlson carefully explains, labor markets are not highly concentrated and labor-market power is largely due to market frictions (such as occupational licensing), rather than concentration. In a similar vein, a recent article by Richard Epstein stresses that heightened antitrust enforcement in labor markets would involve “high administrative and compliance costs to deal with a largely nonexistent threat.” Epstein points out:
[T]raditional forms of antitrust analysis can perfectly deal with labor markets. … What is truly needed is a close examination of the other impediments to labor, including the full range of anticompetitive laws dealing with minimum wage, overtime, family leave, anti-discrimination, and the panoply of labor union protections, where the gains to deregulation should be both immediate and large.
Khan then turns to non-horizontal mergers, stating:

[W]e are looking to sharpen our insights on non-horizontal mergers, including deals that might be described as ecosystem-driven, concentric, or conglomerate. While the U.S. antitrust agencies energetically grappled with some of these dynamics during the era of industrial-era conglomerates in the 1960s and 70s, we must update that thinking for the current economy. We must examine how a range of strategies and effects, including extension strategies and portfolio effects, may warrant enforcement action.
Khan’s statement on non-horizontal mergers once again is fatally flawed.
With regard to vertical mergers (not specifically mentioned by Khan), the FTC abruptly withdrew, without explanation, its approval of the carefully crafted 2020 vertical-merger guidelines. That action offends the rule of law, creating unwarranted and costly business-sector confusion. Khan’s lack of specific reference to vertical mergers does nothing to solve this problem.
With regard to other nonhorizontal mergers, there is no sound economic basis to oppose mergers involving unrelated products. Challenging such mergers would serve no procompetitive rationale and would threaten to reduce welfare by preventing the potential realization of efficiencies. In a 2020 OECD paper drafted principally by DOJ and FTC economists, the U.S. government meticulously assessed the case for challenging such mergers and rejected it on economic grounds. The OECD paper is noteworthy in its entirely negative assessment of the 1960s and 1970s conglomerate cases, which Khan implicitly praises in suggesting they merely should be “updated” to deal with the current economy (citations omitted):
Today, the United States is firmly committed to the core values that antitrust law protects: competition, efficiency, and consumer welfare, rather than individual competitors. During the ten-year period from 1965 to 1975, however, the Agencies challenged several mergers of unrelated products under theories that were antithetical to those values. The “entrenchment” doctrine, in particular, condemned mergers if they strengthened an already dominant firm through greater efficiencies, or gave the acquired firm access to a broader line of products or greater financial resources, thereby making life harder for smaller rivals. This approach is no longer viewed as valid under U.S. law or economic theory. …
These cases stimulated a critical examination, and ultimate rejection, of the theory by legal and economic scholars and the Agencies. In their Antitrust Law treatise, Phillip Areeda and Donald Turner showed that to condemn conglomerate mergers because they might enable the merged firm to capture cost savings and other efficiencies, thus giving it a competitive advantage over other firms, is contrary to sound antitrust policy, because cost savings are socially desirable. It is now recognized that efficiency and aggressive competition benefit consumers, even if rivals that fail to offer an equally “good deal” suffer loss of sales or market share. Mergers are one means by which firms can improve their ability to compete. It would be illogical, then, to prohibit mergers because they facilitate efficiency or innovation in production. Unless a merger creates or enhances market power or facilitates its exercise through the elimination of competition—in which case it is prohibited under Section 7—it will not harm, and more likely will benefit, consumers.
Given the well-reasoned rejection of conglomerate theories by leading antitrust scholars and modern jurisprudence, it would be highly wasteful for the FTC and DOJ to consider covering purely conglomerate (nonhorizontal and nonvertical) mergers in new guidelines. Absent new legislation, challenges of such mergers could be expected to fail in court. Regrettably, Khan appears oblivious to that reality.
Khan’s speech ends with a hat tip to internationalism and the ICN:
The U.S., of course, is far from alone in seeing the need for a course correction, and in certain regards our reforms may bring us in closer alignment with other jurisdictions. Given that we are here at ICN, it is worth considering how we, as an international community, can or should react to the shifting consensus.
Antitrust laws have been adopted worldwide, in large part at the urging of the United States (see here). They remain, however, national laws. One would hope that the United States, which in the past was the world leader in developing antitrust economics and enforcement policy, would continue to seek to retain this role, rather than merely emulate other jurisdictions to join an “international community” consensus. Regrettably, this does not appear to be the case. (Indeed, European Commissioner for Competition Margrethe Vestager made specific reference to a “coordinated approach” and convergence between U.S. and European antitrust norms in a widely heralded October 2021 speech at the annual Fordham Antitrust Conference in New York. And Vestager specifically touted European ex ante regulation as well as enforcement in a May 5 ICN speech that emphasized multinational antitrust convergence.)
Lina Khan’s recent ICN speech on merger policy sends all the wrong signals on merger-guideline revisions. It strongly hints that new guidelines will embody preconceived interventionist notions at odds with sound economics. By calling for a dramatically new direction in merger policy, it injects uncertainty into merger planning. Due to its interventionist bent, Khan’s remarks, combined with prior statements by U.S. Assistant Attorney General Jonathan Kanter (see here), may further serve to deter potentially welfare-enhancing consolidations. Whether the federal courts will be willing to defer to a drastically different approach to mergers by the agencies (one at odds with several decades of a careful evolutionary approach, rooted in consumer-welfare-oriented economics) is, of course, another story. Stay tuned.
A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.
It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:
How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?
Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).
When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.
As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.
The Shaky Foundations of Attention Markets Theory
Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.
First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).
There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:
This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes all substitutable sources of “attention depletion,” so the market is “enormous.”
Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure attention for competition, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:
But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.
A related proposal, the “attentional SSNIP” (A-SSNIP), proceeds along the same lines. It would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.
The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.
None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
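The profitability framing has a standard arithmetic counterpart, “critical loss” analysis, which asks how many sales a hypothetical monopolist can afford to lose before a given price increase becomes unprofitable. A brief sketch (the margin figure is an assumed illustration, not data from any case):

```python
def breakeven_critical_loss(x, m):
    """Maximum fraction of unit sales a hypothetical monopolist can lose
    before a price increase of x (e.g., 0.05 for a 5% SSNIP) becomes
    unprofitable, given a gross margin m = (price - marginal cost) / price."""
    return x / (x + m)

# A 5% SSNIP with an assumed 40% margin breaks even at about an 11.1% sales loss;
# if actual diversion exceeds that, the candidate market is drawn too narrowly.
cl = breakeven_critical_loss(0.05, 0.40)
print(f"critical loss: {cl:.1%}")
```

The point is that the test turns on profitability against a baseline, not on whether any switching occurs at all; some switching is consistent with a well-defined market.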
First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Nobel laureate Jean Tirole show in their seminal work on platform competition, changes to either the price level or the price structure affect economic output in two-sided markets.
This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.
Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market (pushing ad prices down) and drive users away from the platform (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
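A toy model illustrates the feedback (the functional form and all numbers are purely hypothetical): if user participation declines as the ad load rises, then pushing the ad load above the platform’s chosen level shrinks impressions enough that revenue falls even with ad prices held fixed.

```python
def users(ad_load):
    """Assumed user participation, declining in the ad load."""
    return max(0.0, 100.0 - 50.0 * ad_load)

def revenue(ad_load, ad_price=1.0):
    """Ad revenue = price per impression * impressions (ad load * users)."""
    return ad_price * ad_load * users(ad_load)

r_chosen = revenue(1.0)   # the platform's revenue-maximizing ad load in this model
r_forced = revenue(1.2)   # ad load pushed up "SSNIC-style", ad prices held fixed

# Revenue falls: user exit offsets the extra ads, so a one-sided
# ad-load increase is not an independent policy lever.
print(r_chosen, r_forced)
```

The exercise shows why a single-sided “cost increase” cannot be imposed in isolation: the two sides adjust jointly, so the test’s counterfactual is not one the platform could actually choose.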
This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.
Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:
An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.
In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.
The Bait and Switch: Qualitative Indicia
These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:
Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. …The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.
Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.
This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”
This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.
A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences.
There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching.
The Way Forward
The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.
As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.
Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.
Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:
The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.
Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.
In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.
U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)
Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).
At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting transactions that harmfully reduce output and undermine innovation, such as hard-core collusion and courses of conduct that inefficiently exclude rivals and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.
This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.
The degree of congruence between national security and appropriate consumer-welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:
defense-industry mergers and joint ventures;
the intellectual property-antitrust interface, with a focus on patent licensing; and
proposed federal antitrust legislation.
The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national-defense interests. This is not a tension between antitrust and national security as such; rather, it is a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.
The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.
The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”
Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).
Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.
The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne Holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:
Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.
Shifts in government IP-antitrust patent-licensing policy perspectives
Standard setting through standard-setting organizations (SSOs) has been a particularly important means of spawning valuable benchmarks (standards) that have enabled new patent-backed technologies to drive innovation and enable mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing to market these products and encouraging follow-on innovations that have driven fast-paced welfare-enhancing product and process quality improvements.
U.S. antitrust enforcers have long recognized the procompetitive potential of patent licensing. As the DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property explain:
Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …
[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.
Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here). Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:
they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; or
they seek to obtain injunctions against infringers.
In addition, antitrust policy officials (see the 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first-best (included in the standard) and second-best (not included in the standard) competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between first- and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (a classic property-right remedy) encourage opportunistic patent infringement and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They also are harmed by the implication that they must be limited to a relatively low hypothetical “ex ante” licensing rate—a rate that fails entirely to account for the substantial economic-welfare value that accrues to the economy from their contribution to the standard. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.
Recognizing this problem, in 2018 DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which recognized that:
antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)
Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.
Regrettably, antitrust-enforcement-agency statements during the last year effectively have rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a draft policy statement on licensing negotiations and remedies for SEPs, which would displace a 2019 statement that had been in line with the NMA. Unless the FTC and Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.
The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:
China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …
Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.
A Center for Strategic and International Studies submission on the draft policy statement (signed by a former deputy secretary of the DOD, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:
[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.
Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:
The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.
That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China national-security issues (footnotes omitted):
[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …
The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.
Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.
Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …
Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.
A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.
These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, former International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”
Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”
In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.
FTC v. Qualcomm: Misguided enforcement and national security
U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):
The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.
Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless devices makers. Qualcomm also makes computer processors and chips embodied in cutting-edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high-tech American success story that has greatly benefited consumers.
Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMs that have entered a licensing agreement with Qualcomm.
The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy allegedly “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.
The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]
Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.
The appellate decision in Qualcomm largely supports pillar four of the NMA, that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year-by-year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.
Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]
Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):
[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.
The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).
Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.
Proposed Federal Antitrust Legislation
Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, thereby undermining the consumer welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:
Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.
A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity. These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …
The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses. U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors. Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.
If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition. Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs. Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …
Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies. The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.
Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”
The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.
Enlightened antitrust enforcement, centered on consumer welfare, can and should be read in a manner that is harmonious with national-security interests.
The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.
Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.
In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.
Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.
The acceptance and implementation of due-process standards confer a variety of welfare benefits on society. As Christopher Yoo, Thomas Fetzer, Shan Jiang, and Yong Huang explain, strong procedural due-process protections promote: (1) compliance with basic norms of impartiality; (2) greater accuracy of decisions; (3) stronger economic growth; (4) increased respect for government; (5) better compliance with the law; (6) better control of the bureaucracy; (7) restraints on the influence of special-interest groups; and (8) reduced corruption.
Recognizing these benefits (and consistent with the long Anglo-American tradition of recognizing due-process rights that dates back to Magna Carta), the U.S. government (USG) has long been active in advancing the adoption of due-process principles by competition-law authorities around the world, working particularly through the Organisation for Economic Co-operation and Development (OECD) and the International Competition Network (ICN). More generally, due process may be seen as an aspect of the rule of law, which is as important in antitrust as in other legal areas.
The USG has supported OECD Competition Committee work on due-process safeguards which began in 2010, and which culminated in the OECD ministers’ October 2021 adoption of a “Recommendation on Transparency and Procedural Fairness in Competition Law Enforcement.” This recommendation calls for: (1) transparency and predictability in competition-law enforcement; (2) independence, impartiality, and professionalism of competition authorities; (3) non-discrimination, proportionality, and consistency in the treatment of parties subject to scrutiny; (4) timeliness in handling cases; (5) meaningful engagement with parties (including parties’ right to respond and be heard); (6) protection of confidential and privileged information; (7) impartial judicial review of enforcement decisions; and (8) periodic review of policies, rules, procedures, and guidelines, to ensure that they are aligned with the preceding seven principles.
The USG has also worked through the ICN to generate support for the acceptance of due-process principles by ICN member competition agencies and their governments. In describing ICN due-process initiatives, James Rill and Jana Seidl have explained that “[t]he current challenge is to determine the extent to which the ICN, as a voluntary organization, can or should establish mechanisms to evaluate implementation of … [due process] norms by its members and even non-members.”
In 2019, the ICN announced creation of a Framework for Competition Agency Procedures (CAP), open to both ICN and non-ICN national and multinational (most prominently, the EU’s Directorate General for Competition) competition agencies. The CAP essentially embodied the principles of a June 2018 U.S. Justice Department (DOJ) framework proposal. A September 2021 CAP Report (footnotes omitted) issued at an ICN steering-group meeting noted that the CAP had 73 members, and summarized the history and goals of the CAP as follows:
The ICN CAP is a non-binding, opt-in framework. It makes use of the ICN infrastructure to maximize visibility and impact while minimizing the administrative burden for participants that operate in different legal regimes and enforcement systems with different resource constraints. The ICN CAP promotes agreement among competition agencies worldwide on fundamental procedural norms. The Multilateral Framework for Procedures project, launched by the US Department of Justice in 2018, was the starting point for what is now the ICN CAP.
The ICN CAP rests on two pillars: the first pillar is a catalogue of fundamental, consensus principles for fair and effective agency procedures that reflect the broad consensus within the global competition community. The principles address: non-discrimination, transparency, notice of investigations, timely resolution, confidentiality protections, conflicts of interest, opportunity to defend, representation, written decisions, and judicial review.
The second pillar of the ICN CAP consists of two processes: the “CAP Cooperation Process,” which facilitates a dialogue between participating agencies, and the “CAP Review Process,” which enhances transparency about the rules governing participants’ investigation and enforcement procedures.
The ICN CAP template is the practical implementation tool for the CAP. Participants each submit CAP templates, outlining how their agencies adhere to each of the CAP principles. The templates allow participants to share and explain important features of their systems, including links and other references to related materials such as legislation, rules, regulations, and guidelines. The CAP templates are a useful resource for agencies to consult when they would like to gain a quick overview of other agencies’ procedures, benchmark with peer agencies, and develop new processes and procedures.
Through the two pillars and the template, the CAP provides a framework for agencies to affirm the importance of the CAP principles, to confer with other jurisdictions, and to illustrate how their regulations and guidelines adhere to those principles.
In short, the overarching goal of the ICN CAP is to give agencies a “nudge” to implement due-process principles by encouraging consultation with peer CAP members and exposing to public view agencies’ actual due-process record. The extent to which agencies will prove willing to strengthen their commitment to due process because of the CAP, or even join the CAP, remains to be seen. (China’s competition agency, the State Administration for Market Regulation (SAMR), has not joined the ICN CAP.)
Antitrust, Due Process, and the Rule of Law at the DOJ and the FTC
Now that the ICN CAP and OECD recommendation are in place, it is important that the DOJ and Federal Trade Commission (FTC), as long-time international promoters of due process, lead by example in adhering to all of those multinational instruments’ principles. A failure to do so would, in addition to having negative welfare consequences for affected parties (and U.S. economic welfare), undermine USG international due-process advocacy. Less effective advocacy efforts could, of course, impose additional costs on American businesses operating overseas, by subjecting them to more procedurally defective foreign antitrust prosecutions than otherwise.
With those considerations in mind, let us briefly examine the current status of due-process protections afforded by the FTC and DOJ. Although traditionally robust procedural safeguards remain strong overall, some worrisome developments during the first year of the Biden administration merit highlighting. Those developments implicate classic procedural issues and some broader rule-of-law concerns. (This commentary does not examine due-process and rule-of-law issues associated with U.S. antitrust enforcement at the state level, a topic that warrants scrutiny as well.)
New FTC leadership has taken several actions that have unfortunate due-process and rule-of-law implications (many of them through highly partisan 3-2 commission votes featuring strong dissents).
Consider the HSR Act, a Congressional compromise that gave enforcers advance notice of deals and parties the benefit of repose. HSR review [at the FTC] now faces death by a thousand cuts. We have hit month nine of a “temporary” and “brief” suspension of early termination. Letters are sent to parties when their waiting periods expire, warning them to close at their own risk. Is the investigation ongoing? Is there a set amount of time the parties should wait? No one knows! The new prior approval policy will flip the burden of proof and capture many deals below statutory thresholds. And sprawling investigations covering non-competition concerns exceed our Clayton Act authority.
These policy changes impose a gratuitous tax on merger activity – anticompetitive and procompetitive alike. There are costs to interfering with the market for corporate control, especially as we attempt to rebound from the pandemic. If new leadership wants the HSR Act rewritten, they should persuade Congress to amend it rather than taking matters into their own hands.
Uncertainty and delay surrounding merger proposals, and new merger-review processes that appear to flout statutory commands, are FTC “innovations” in obvious tension with due-process guarantees.
FTC rulemaking initiatives have due-process and rule-of-law problems. As Commissioner Wilson noted (footnotes omitted), “[t]he [FTC] majority changed our rules of practice to limit stakeholder input and consolidate rulemaking power in the chair’s office. In Commissioner [Noah] Phillips’ words, these changes facilitate more rules, but not better ones.” Lack of stakeholder input offends due process. Even more serious, however, is the fact that far-reaching FTC competition rules are being planned (see the December 2021 FTC Statement of Regulatory Priorities). FTC competition rulemaking is likely beyond its statutory authority and would fail a cost-benefit analysis (see here). Moreover, even if competition rules survived, they would offend the rule of law (see here) by “lead[ing] to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency.”
The FTC’s July 2021 withdrawal of its 2015 “Statement of Enforcement Principles Regarding ‘Unfair Methods of Competition’ [UMC] Under Section 5 of the FTC Act” likewise undercuts the rule of law (see here). The 2015 Statement had tended to increase predictability in enforcement by tying the FTC’s exercise of its UMC authority to well-understood antitrust rule-of-reason principles and the generally accepted consumer welfare standard. By withdrawing the statement (over the dissents of Commissioners Wilson and Phillips) without promulgating a new policy, the FTC majority reduced enforcement guidance and generated greater legal uncertainty. The notion that the FTC may apply the UMC concept in an unbounded fashion lacks legal principle and threatens to chill innovative and welfare-enhancing business conduct.
Finally, the FTC’s abrupt September 2021 withdrawal of its approval of jointly issued 2020 DOJ-FTC Vertical Merger Guidelines (again over a dissent by Commissioners Wilson and Phillips), offends the rule of law in three ways. As Commissioner Wilson explains, it engenders confusion as to FTC policies regarding vertical-merger analysis going forward; it appears to reflect flawed economic thinking regarding vertical integration (which may in turn lead to enforcement error); and it creates a potential tension between DOJ and FTC approaches to vertical acquisitions (the third concern may disappear if and when DOJ and FTC agree to new merger guidelines).
As of now, the Biden administration DOJ has not taken as many actions that implicate rule-of-law and due-process concerns. Two recent initiatives with significant rule-of-law implications, however, deserve mention.
First, on Dec. 6, 2021, DOJ suddenly withdrew a 2019 policy statement on “Licensing Negotiations and Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments.” In so doing, DOJ simultaneously released a new draft policy statement on the same topic, and requested public comments. The timing of the withdrawal was peculiar, since the U.S. Patent and Trademark Office (PTO) and the National Institute of Standards and Technology (NIST)—who had joined with DOJ in the 2019 policy statement (which itself had replaced a 2013 policy statement)—did not yet have new Senate-confirmed leadership and were apparently not involved in the withdrawal. What’s more, DOJ originally requested that public comments be filed by the beginning of January, a ridiculously short amount of time for such a complex topic. (It later relented and established an early February deadline.) More serious than these procedural irregularities, however, are two new features of the Draft Policy Statement: (1) its delineation of a suggested private-negotiation framework for patent licensing; and (2) its assertion that standard essential patent (SEP) holders essentially forfeit the right to seek an injunction. These provisions, though not binding, may have a coercive effect on some private negotiators, and they problematically insert the government into matters that are appropriately the province of private businesses and the courts. Such an involvement by government enforcers in private negotiations, which treats one category of patents (SEPs) less favorably than others, raises rule-of-law questions.
Second, in January 2022, DOJ and the FTC jointly issued a “Request for Information on Merger Enforcement” (RFI) that contemplated the issuance of new merger guidelines (see my recent analysis, here). The RFI was chock full of queries to prospective commentators that generally reflected a merger-skeptical tone. This suggests a predisposition to challenge mergers that, if embodied in guidelines language, could discourage some (or perhaps many) non-problematic consolidations from being proposed. New merger guidelines that were implicitly anti-merger would be a departure from previous guidelines, which stated in neutral fashion that they would consider both the anticompetitive risks and procompetitive benefits of mergers being reviewed. A second major concern is that the enforcement agencies might produce long and detailed guidelines containing all or most of the many theories of competitive harm found in the RFI. Overly complex guidelines would provide no true guidance to private parties, inconsistent with the principle that individuals should be informed what the law is. Such guidelines also would give enforcers greater flexibility to selectively pick and choose theories best suited to block particular mergers. As such, the guidelines might be viewed by judges as justifications for arbitrary, rather than principled, enforcement, at odds with the rule of law.
It is to be hoped that the FTC and DOJ will take into account this international dimension in assessing the merits of antitrust “reforms” now under consideration. New enforcement policies that sow delay and uncertainty undermine the rule of law and are inconsistent with due-process principles. The consumer welfare harm that may flow from such deficient policies may be substantial. The agency missteps identified above should be rectified, and new policies that would weaken due-process protections and undermine the rule of law should be avoided.
President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.
Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.
Life Sciences: USTR Campaigns Against Intellectual-Property Rights
In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.
Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future.
Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.
This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.
Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition
In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.
It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.
The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention.
Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.
If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively.
Wireless Communications: DOJ Takes Aim at Standard-Essential Patents
Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries. It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.
In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.
Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.
From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.
This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.
The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.
Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.
Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.
The RFI Reflects the False Premise that Competition is Declining in the United States
The FTC press release announcing the RFI made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:
Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.
This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:
Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.
Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.
Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.
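For readers unfamiliar with the concentration measures cited in Kulick’s research, both the HHI and the CR4 are simple functions of firms’ market shares. The sketch below uses hypothetical shares (not data from the studies discussed) purely to show how each measure is computed:

```python
# Illustrative only: the market shares below are hypothetical,
# not drawn from the research discussed in the text.

def hhi(shares_pct):
    """Herfindahl-Hirschman index: the sum of squared market shares (in percent).
    Ranges from near 0 (atomistic market) to 10,000 (pure monopoly)."""
    return sum(s ** 2 for s in shares_pct)

def cr4(shares_pct):
    """Four-firm concentration ratio: combined share of the four largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:4])

# A hypothetical six-firm industry:
shares = [30, 25, 15, 15, 10, 5]

print(hhi(shares))  # 900 + 625 + 225 + 225 + 100 + 25 = 2100
print(cr4(shares))  # 30 + 25 + 15 + 15 = 85
```

A merger of the two 15-percent firms in this example would raise the HHI by 450 points (to 2,550) while leaving the CR4 unchanged, which is why the two measures can tell different stories about the same industry.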
In short, the strongest justification for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.
The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship
The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)
What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”
Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).
The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.
For example, the questions addressing efficiencies, under RFI heading 14, cast them in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”
The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestar of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.
Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”
Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)
The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”
This question answers itself, by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:
The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.
By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).
These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.
New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI
The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.
As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):
[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.
It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that are served by Section 7 enforcement.
Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”
Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.
One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.
In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.
New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies
The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.
As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.
In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”
New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.
Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.
Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):
When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.
Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.
New Guidelines, if Issued, Should Give Due Credit to Efficiencies
Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.
Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because efficiency claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable feature of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.
Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)
Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.
Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more helpful approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”
She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.
The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.
Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.
If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:
Eschew references to dated and discredited case law;
Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).
Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.
Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.
Background: The Grudging Acceptance of Merger Efficiencies
Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”
In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.
Two Enforcement Matters with ‘Efficiencies Offense’ Overtones
Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)
FTC v. Facebook
The FTC’s 2020 federal district court monopolization complaint against Facebook, still at the motion-to-dismiss stage with respect to the amended complaint (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the commission appears to be treating merger-related efficiencies as grounds for condemning those acquisitions. Specifically:
[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.
The FTC’s concerns about a scale-based merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”
UK Competition and Markets Authority (CMA) v. Facebook
The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . . These platforms license the use of Giphy for its users.”
The CMA theorized that Facebook could harm competition by (1) restricting access to Giphy’s digital libraries to Facebook’s competitors; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display advertising business.
As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its GIF libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”
Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:
Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.
There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”
Are the Facebook Cases Merely Random Straws in the Wind?
At first blush, one might hesitate to read too much into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic efficiencies arguments (whose status was tenuous at enforcement agencies to begin with) may actually be viewed as “offensive” by the new breed of enforcers.
In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.
Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.
The FTC’s September 2021 withdrawal of the 2020 Vertical Merger Guidelines reflected similar hostility toward efficiencies. As one account of the withdrawal explains:
The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.
Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation and distribution methods that the antitrust laws otherwise encourage.”
Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggest that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.
One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)
While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk of projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.
As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.