
The Biden administration’s antitrust reign of error continues apace. The U.S. Justice Department’s (DOJ) Antitrust Division has indicated in recent months that criminal prosecutions may be forthcoming under Section 2 of the Sherman Antitrust Act, but refuses to provide any guidance regarding enforcement criteria.

Earlier this month, Deputy Assistant Attorney General Richard Powers stated that “there’s ample case law out there to help inform those who have concerns or questions” regarding Section 2 criminal enforcement, conveniently ignoring the fact that criminal Section 2 cases have not been brought in almost half a century. Needless to say, those ancient Section 2 cases (which are relatively few in number) antedate the modern era of economic reasoning in antitrust analysis. What’s more, unlike Section 1 price-fixing and market-division precedents, they yield no clear rule as to what constitutes criminal unilateral behavior. Thus, DOJ’s suggestion that old cases be consulted for guidance is disingenuous at best. 

It follows that DOJ criminal-monopolization prosecutions would be sheer folly. They would spawn substantial confusion and uncertainty and disincentivize dynamic economic growth.

Aggressive unilateral business conduct is a key driver of the competitive process. It brings about “creative destruction” that transforms markets, generates innovation, and thereby drives economic growth. As such, one wants to be particularly careful before condemning such conduct on grounds that it is anticompetitive. Accordingly, the cost of erroneously condemning procompetitive unilateral conduct is particularly high and damaging to economic prosperity.

Moreover, errors in assessing unilateral conduct are more likely than errors in assessing joint conduct, because it is very hard to distinguish between procompetitive and anticompetitive single-firm conduct, as DOJ’s 2008 Report on Single Firm Conduct Under Section 2 explains (citations omitted):

Courts and commentators have long recognized the difficulty of determining what means of acquiring and maintaining monopoly power should be prohibited as improper. Although many different kinds of conduct have been found to violate section 2, “[d]efining the contours of this element … has been one of the most vexing questions in antitrust law.” As Judge Easterbrook observes, “Aggressive, competitive conduct by any firm, even one with market power, is beneficial to consumers. Courts should prize and encourage it. Aggressive, exclusionary conduct is deleterious to consumers, and courts should condemn it. The big problem lies in this: competitive and exclusionary conduct look alike.”

The problem is not simply one that demands drawing fine lines separating different categories of conduct; often the same conduct can both generate efficiencies and exclude competitors. Judicial experience and advances in economic thinking have demonstrated the potential procompetitive benefits of a wide variety of practices that were once viewed with suspicion when engaged in by firms with substantial market power. Exclusive dealing, for example, may be used to encourage beneficial investment by the parties while also making it more difficult for competitors to distribute their products.

If DOJ does choose to bring a Section 2 criminal case soon, would it target one of the major digital platforms? Notably, a U.S. House Judiciary Committee letter recently called on DOJ to launch a criminal investigation of Amazon (see here). Also, current Federal Trade Commission (FTC) Chair Lina Khan launched her academic career with an article focusing on Amazon’s “predatory pricing” and attacking the consumer welfare standard (see here).

Khan’s “analysis” has been totally discredited. As a trenchant scholarly article by Timothy Muris and Jonathan Nuechterlein explains:

[DOJ’s criminal Section 2 prosecution of A&P, begun in 1944,] bear[s] an eerie resemblance to attacks today on leading online innovators. Increasingly integrated and efficient retailers—first A&P, then “big box” brick-and-mortar stores, and now online retailers—have challenged traditional retail models by offering consumers lower prices and greater convenience. For decades, critics across the political spectrum have reacted to such disruption by urging Congress, the courts, and the enforcement agencies to stop these American success stories by revising antitrust doctrine to protect small businesses rather than the interests of consumers. Using antitrust law to punish pro-competitive behavior makes no more sense today than it did when the government attacked A&P for cutting consumers too good a deal on groceries. 

Before bringing criminal Section 2 charges against Amazon, or any other “dominant” firm, DOJ leaders should read and absorb the sobering Muris and Nuechterlein assessment. 

Finally, not only would DOJ Section 2 criminal prosecutions represent bad public policy—they would also undermine the rule of law. In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch succinctly summarized the importance of the rule of law in antitrust enforcement:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

Bringing criminal monopolization cases now, after a half-century of inaction, would be antithetical to the stability and continuity that underlie the rule of law. What’s worse, the failure to provide prosecutorial guidance would be squarely at odds with concerns of notice and reliance that inform the rule of law. As such, a DOJ decision to target firms for Section 2 criminal charges would offend the rule of law (and, sadly, follow the FTC’s recent example of flouting the rule of law, see here and here).

In sum, the case against criminal Section 2 prosecutions is overwhelming. At a time when DOJ is facing difficulties winning “slam dunk” criminal Section 1 prosecutions targeting facially anticompetitive joint conduct (see here, here, and here), the notion that it would criminally pursue unilateral conduct that may generate substantial efficiencies is ludicrous. Hopefully, DOJ leadership will come to its senses and drop any and all plans to bring criminal Section 2 cases.

[The 12th entry in our FTC UMC Rulemaking symposium is from guest contributor Steven J. Cernak, a partner in the antitrust and competition practice of Bona Law in Detroit, Michigan. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission (FTC) has been in the antitrust-enforcement business for more than 100 years. Its new leadership is considering some of the biggest changes ever in its enforcement methods. Instead of a detailed analysis of each case on its own merits, some FTC leaders now want the agency’s unelected bureaucrats to write competition rules for the entire economy under its power to stop unfair methods of competition. Such a move would be bad for competition and the economy—and for the FTC itself.

The FTC enforces the antitrust laws through its statutory authority to police unfair methods of competition (UMC). Like all antitrust challengers, the FTC now must conduct a detailed analysis of the specific actions of particular competitors. Whether the FTC decides to challenge actions initially in its own administrative courts or in federal courts, eventually it must convince independent judges that the challenged conduct really does harm competition. When finalized, those decisions set precedent. Future parties can argue their particular details are different or otherwise require a different outcome. As a result, the antitrust laws slowly evolve in ways understandable to all.

Some members of FTC’s new leadership have argued that the agency should skip the hard work of individual cases and instead issue blanket rules to cover competitive situations across the economy. Since taking over in the new administration, they have taken steps that seem to make it easier for the FTC to issue such broad competition rules. Doing so would be a mistake for several reasons.

First, it is far from clear that Congress gave the FTC the authority to issue such rules. Also, any such grant of quasi-legislative power to this independent agency might be unconstitutional. The FTC already gets to play prosecutor and judge in many cases. Becoming a legislature might be going too far. Other commentators, both in this symposium and elsewhere, have detailed those arguments. But however those arguments shake out, the FTC will need to take the time and resources to fight off the inevitable challenges.

But even if it can, the FTC should not. The case-by-case approach allows for detailed analysis, making the outcome more likely to be correct. If there are any mistakes, they affect only those parties.

If it turns to competition rulemaking, how will the FTC gain the knowledge and develop the wisdom to develop rules that apply across large swaths of the economy for an unlimited time? Will it apply the same rules to companies with 8% and 80% market share? And to companies making software or automobiles or flying passengers across the country? And will it apply those rules today and next year, no matter the innovations that occur in between? The hubris to think that some all-knowing Washington wizards can get all that right, all the time, is staggering.

Yes, there are some general antitrust rules, like price-fixing agreements being illegal because they harm consumers. But those rules were developed by many lawyers, economists, judges, and witnesses through decades of case-by-case analyses and, even today, parties can argue to a court that they don’t apply to their particular facts. A one-size-fits-all rule won’t have even that flexibility.

For example, suppose the FTC, based on, say, an investigation of toilet-bowl manufacturers, develops a rule that all price-fixing, even if the fixed price is reasonable, is automatically illegal. How would such a rigid rule handle, say, a joint license with a single price issued by competing music composers? Or could a single rule that anticipates the very different facts of Trenton Potteries and Broadcast Music be written in a way that is both short enough to be understood and broad enough to anticipate all potential future facts? Perhaps the rule inspired by Trenton Potteries could be adjusted when the Broadcast Music facts become known. But then, that is just back to the detailed, case-by-case analysis that we have now, except with the FTC rule-makers changing the rules rather than an independent judge.

Any new FTC rules could conflict with the court opinions generated by antitrust cases brought by the U.S. Justice Department’s (DOJ) Antitrust Division, state attorneys general, or private parties. For instance, the FTC and the Division generally divide up the industries that make up the economy based on expertise and experience. Should the competitive rules differ by enforcer? By industry?

As an example, consider, say, a hypothetical automatic-transmission company whose smallest products can be used in light-duty pickup trucks while the bulk of its product line is used in the largest heavy-duty trucks and equipment. Traditionally, the FTC has reviewed antitrust issues in the light-duty industry while the Division has taken heavy-duty. Should the antitrust rules affecting this hypothetical company’s light-duty sales be different than those affecting the heavy-duty sales based solely on the enforcer and not the applicable competitive facts?

Antitrust is a law-enforcement regime with rules that have changed slowly over decades through individual cases, as economic understandings have evolved. It could have been a regulatory regime, but elected officials did not make that choice. Antitrust could be changed now to a regulatory regime. Individual rules could be changed. Such monumental changes, however, should only be made by Congress, as is being debated now, not by three unelected FTC officials.

In the 1970s, the FTC overreached on rules about deceptive marketing and was slapped down by Congress, the courts, and the public. The Washington Post criticized it as “the national nanny.” Its reputation and authority suffered. We did not need a national nanny then. We don’t need one today, hectoring us to follow overbroad, ill-fitting rules designed by insulated “experts” and not subject to review.

The FTC has very important roles to play regarding understanding and protecting competition in the U.S. economy (before even getting to its crucial consumer-protection mission). Even with potential increases in its budget, the FTC, like all of us, will have limited resources, time, expertise, and reputation. It should not squander any of that on an ill-fated, quixotic, and hubristic effort to tell everyone how to compete. Instead, the FTC should focus on what it does best: challenging the bad actions of bad actors and convincing a court that it got it right. That is how the FTC can best protect America’s consumers, as its (nicely redesigned) website proclaims.

[The ninth entry in our FTC UMC Rulemaking symposium comes from guest contributor Aaron Nielson of BYU Law. It is the second post we are publishing today; see also this related post from Jonathan M. Barnett of USC Gould School of Law. Like that post, it adapts a paper that will appear as a chapter in the forthcoming book FTC’s Rulemaking Authority, which will be published by Concurrences later this year. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

For obvious reasons, many scholars, lawyers, and policymakers are thinking hard about whether the Federal Trade Commission (FTC) has authority to promulgate substantive “unfair methods of competition” (UMC) regulations. I first approached this issue a couple of years ago when the FTC asked me to present on the agency’s rulemaking powers. For my presentation, I focused on 1973’s National Petroleum Refiners Association v. FTC and, in particular, whether the U.S. Court of Appeals for the D.C. Circuit correctly held that the FTC has authority to promulgate such rules. I ventured that relying on National Petroleum Refiners would present “litigation risk” for the FTC because the method of statutory interpretation used by the D.C. Circuit is out of step with how courts read statutes today. Richard Pierce, who presented at the same event, was even more blunt:

Let me just express my complete agreement with Aaron’s analysis of the extraordinary fragility of the FTC position that National Petroleum Refiners is going to protect them. I teach National Petroleum Refiners every year. And I teach it as an object lesson in what no court, modern court, would ever do today. The reasoning is, by today’s standards, preposterous.  … [T]he interpretive method that was used in that case was fairly commonly used on the DC Circuit at that time. There is no justice today—not just Gorsuch, but Kagan, Breyer—there is no justice today that would [use that method]. 

That was a fun academic discussion—with emphasis on the word academic. After all, for decades, this issue has only been an academic question because the FTC has not attempted to use such authority. That academic question, however, may soon become a concrete dispute. 

Pierce and others have advanced the anti-National Petroleum Refiners position. Recently, Kacyn H. Fujii has advanced the pro-National Petroleum Refiners position. Should the FTC promulgate a substantive UMC rule, the federal courts will decide which position is right. As that day approaches, many more experts will offer thoughts on this important question. 

Here, however, I want to focus on a different question. What would happen if the FTC could promulgate broad, high-profile UMC rules, including new antitrust tests?

I’ve just posted to SSRN a new essay that addresses that question: “What Happens If the FTC Becomes a Serious Rulemaker?” This essay will be published in the forthcoming book FTC’s Rulemaking Authority. Here is the abstract:

The Federal Trade Commission (FTC) is no one’s idea of a serious rulemaker. To the contrary, the FTC is in many respects a law enforcement agency that operates through litigation and consent decrees. There are understandable reasons for this absence of FTC rulemaking. Not only has Congress imposed heightened procedural obligations on the FTC’s ability to promulgate consumer protection rules, but also it is far from clear that the FTC even has statutory authority to promulgate substantive rules relating to unfair methods of competition (UMC). Yet things may be changing. It appears that the FTC is preparing to begin using rulemaking more aggressively, including for substantive UMC regulations. The FTC’s ability to use rulemaking this way will undoubtedly prompt sharp and important legal challenges.

This short essay, however, considers the question of FTC rulemaking from a different angle: What if the FTC has broad rulemaking authority?  And what if the FTC begins to use that authority for controversial policies? Traditionally, the FTC operates in a case-by-case fashion that attempts to apply familiar principles to the facts of individual matters. Should the FTC begin making broader policy choices through rulemaking, however, it should be prepared for at least three unintended consequences: (i) more ossification, including more judicial challenges and perhaps White House oversight; (ii) more zigzagging policy as new FTC leadership, in response to changes in presidential control, moves to undo what the agency has just done; and (iii) to more often be the target of what has been called “administrative law as blood sport,” by which political actors make it more difficult for the agency to function, for example by delaying the confirmation process.  The upshot would be an agency that could in theory (and sometimes no doubt in fact) regulate more broadly than the FTC does now, but also one with a different character.  In short, the more the FTC becomes a serious rulemaker, the more the FTC will change as an institution.

Here, I will summarize some of the thoughts from my essay. Please read the full essay, however, if you’re looking for citations and a more complete explanation. 

At the outset, my essay is not an attack on rulemaking. There are good reasons to prefer agencies to make policy through rulemaking rather than, say, case-by-case adjudication or threats. In fact, Kristin Hickman and I have written an entire article explaining why rulemaking (generally) should be favored over adjudication. That said, I am concerned about the idea that the FTC has substantive rulemaking authority to promulgate broad UMC rules under Section 5 of the FTC Act. Rulemaking has many advantages, but it does not follow that rulemaking under this very open-ended statute makes sense, especially if the goal is broad policy change. Indeed, if the FTC were to use rulemaking authority for small issues, presumably some of the concerns I sketch out would not apply (though the legal question, of course, still would). 

As I explain in my essay, when agencies attempt to use rulemaking for significant policies—which, not by coincidence, disproportionately tend to be controversial policies—at least three unintended consequences may result: ossification, zigzagging policy, and blood-sport tactics.   

First, ossification. For decades, many administrative law scholars have lamented how ossified the rulemaking process has become. Notice-and-comment rulemaking may not look all that difficult, but the process has become challenging, at least for the most significant rules. (There is an empirical dispute about how ossified the process is, but part of that debate may be explained by the nature of the rules at issue; agencies perhaps can promulgate lower-profile rules without much trouble, while struggling with the more significant ones.) Agencies looking to make important policy changes through notice-and-comment rulemaking, for example, often receive mountains of comments from the public. Indeed, agencies may receive millions of comments. Because agencies have to respond to material comments, rules that prompt that volume of commentary aren’t so easy to do. Likewise, the most consequential rules almost invariably prompt litigation, and as part of so-called “hard look” review, the agency will have to persuade a court that it has considered the important aspects of the problem. Preparing for that sort of review can require a great deal of upfront work. And although its domain does not extend to independent agencies, the Office of Information and Regulatory Affairs (OIRA) also requires agencies to do a great deal of analysis before promulgating the most significant rules.

If the FTC begins promulgating significant rules, it should be prepared for an ossified process that requires reallocating resources within the agency and engaging in more “admin law” litigation. Because rulemaking can be labor intensive, moreover, the FTC may not be able to pursue as many policies as some no doubt wish. Furthermore, the U.S. Justice Department has concluded that the White House has the authority to subject independent agencies to the OIRA process. If the FTC begins promulgating significant rules—especially regulations of the sort that may be improved by inter-agency coordination and external evaluation, two hallmarks of the OIRA process—the White House may decide that the time has come to put the FTC within OIRA’s tent. Such developments would change how the FTC functions. 

Second, zigzagging policy. It turns out that when agencies use regulatory power for significant policies, they sometimes find themselves using that same power to undo those policies when control of the White House shifts. Elsewhere, I’ve written about the Federal Communications Commission and so-called “net neutrality” rules. For decades, the FCC has flip-flopped on this significant issue; when Republicans control the White House, the FCC does one thing, but when Democrats take over, it does something else. Flip-flopping, however, is not limited to the FCC. As Pierce has put it, “[t]he same analysis applies in each of the hundreds of contexts in which Democrats and Republicans have opposing and uncompromising preferences with respect to policy issues. …” Zigzagging policy is bad for business because it makes it harder to invest, and for that same reason, is bad for consumers who do not gain the benefits of foregone investment. It is also bad for regulators, who must spend time and effort to undo the agency’s own prior actions. To be sure, agencies don’t always flip-flop; indeed, the ossification of the rulemaking process may limit it, at the margins. But especially for the most consequential policies, zigzagging sometimes happens.

Accordingly, if the FTC begins promulgating significant policies through rulemaking, it should expect some zigzagging policy when the White House changes hands. As my essay explains:

In this current age of polarization, regulatory efforts to address divisive issues may not work well because what an agency does under one administration can be undone in the next administration. Thus, the end result may be policy that exists under some administrations but not others. Indeed, the FTC’s recent slew of party-line votes suggests that if the FTC begins using rulemaking for controversial policies, the FTC will look to undo those rules when the political balance flips.  Of course, not all FTC rules will vacillate—there are not enough resources to undo everything, especially as agencies confront new issues. But if the FTC becomes a serious rulemaker, some zigzagging should occur.

Finally, consider “administrative law as blood sport”—an evocative phrase that comes from Thomas McGarity. The idea is that agencies engaged in rulemaking are increasingly subject to political opposition across several dimensions, including “strategies aimed at indirectly disrupting the implementation of regulatory programs by blocking Senate confirmation of new agency leaders, cutting off promised funding for agencies, introducing rifle-shot riders aimed at undoing ongoing agency action, and subjecting agency heads to contentious oversight hearings.” In other words, an opponent of a proposed regulation may try to stop it through the rulemaking process (for example, by filing comment and then going to court), but may also try to stop it outside of the rulemaking process through political means. 

As my essay explains, if the FTC begins using rulemaking for controversial policies, blood-sport tactics presumably will follow. Similarly, the FTC should also expect litigation of a more fundamental character. The U.S. Supreme Court is increasingly wary of independent agencies; to the extent that the FTC begins making significant policy choices without presidential control, the likelihood that the Supreme Court will say “enough” increases. 

In short, if the FTC engages in significant rulemaking, its character will change. No doubt, some proponents of FTC rulemaking would accept that cost, but in assessing FTC rulemaking, it is important to remember unintended consequences, too.

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—“Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by visiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working at the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko, and recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court held, broadly, that a decision by Congress to regulate displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, known for its vicissitudes and adherence to the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was ambiguous and subject to the FCC’s interpretation was a central debate within the regulatory community during 2012 and 2013. The broad consensus, at least among my peers, was that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own terms (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust” was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority. These debates included whether it was necessarily coterminous with the Sherman Act (so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written are important to understanding both their arguments and the continual currents that propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based on the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds, as well, the Court’s hostility, at the time, to domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative law doctrine. And thus, so too, the argument goes, must the Court’s longstanding treatment of antitrust as exceptional fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his linkLine majority opinion, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and laments “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer writes in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. Stated simply, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners has long stood for the proposition that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to grow at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for the Court, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which the Court acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring more aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra—which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This operates through both the administrative-law vector, with the Court reconsidering how it views delegations of congressional authority to agencies (such as through the major questions doctrine and limits on agency rulemaking authority), and through the Court’s thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 201(b) of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, of the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine and the courts’ acceptance of broad delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases—but, like the major questions doctrine, they go to the question of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that develops legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and, on the other, an agency that develops substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors. In practice, there is a world of difference between these approaches. As with the Court’s concerns about the major questions doctrine, were the Court to review National Petroleum Refiners Association or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the Justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential limits on the scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in “Administrative Antitrust” and the Court’s opinions on the antitrust-regulation interface echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach to antitrust law focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement: “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct its mistake. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate, while the agency swings between policies with changing administrations.

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to leave to change with changing political wills. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.
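To make the error-cost logic concrete, here is a stylized sketch of the comparison the framework contemplates. The numbers are purely illustrative assumptions chosen for exposition (they come from no case, study, or agency source); the point is only that an agency’s accuracy advantage can be swamped by the expected cost of policy reversals.

```python
# Stylized error-cost comparison in the spirit of Easterbrook's framework.
# All numbers are illustrative assumptions, not estimates from any source.

def expected_cost(p_false_positive, cost_false_positive,
                  p_false_negative, cost_false_negative,
                  p_policy_reversal, cost_policy_reversal,
                  enforcement_cost):
    """Expected social cost of a regime: error costs, plus the expected
    cost of policy reversals, plus the cost of running the regime."""
    error_cost = (p_false_positive * cost_false_positive
                  + p_false_negative * cost_false_negative)
    reversal_cost = p_policy_reversal * cost_policy_reversal
    return error_cost + reversal_cost + enforcement_cost

# Judicial regime: assume higher per-case error rates but very low
# probability that settled precedent flips with political control.
judicial = expected_cost(
    p_false_positive=0.10, cost_false_positive=100,
    p_false_negative=0.10, cost_false_negative=100,
    p_policy_reversal=0.02, cost_policy_reversal=500,
    enforcement_cost=20,
)

# Administrative regime: assume an expert agency makes fewer case-level
# errors, but its rules are far more likely to zigzag when the White House
# changes hands.
administrative = expected_cost(
    p_false_positive=0.05, cost_false_positive=100,
    p_false_negative=0.05, cost_false_negative=100,
    p_policy_reversal=0.40, cost_policy_reversal=500,
    enforcement_cost=20,
)

print(f"judicial regime:       expected cost = {judicial:.1f}")        # 50.0
print(f"administrative regime: expected cost = {administrative:.1f}")  # 230.0
# Under these assumptions, the agency's accuracy advantage (10 units of
# error cost) is swamped by the added expected reversal cost (190 units).
```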

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, with skepticism of broad deference and agency claims of authority.

Many of the underlying rationales behind the ideas of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law and that the Supreme Court and the lower courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or seem more grounded in policy better created by Congress, it is likely to find itself on the losing side of judicial opinions.

[This post is the first in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

There is widespread interest in the potential tools that the Biden administration’s Federal Trade Commission (FTC) may use to address a range of competition-related and competition-adjacent concerns. A focal point for this interest is the potential that the FTC may use its broad authority to regulate unfair methods of competition (UMC) under Section 5 of the FTC Act to make rules that address a wide range of conduct. This “potential” is expected to become a “likelihood” with the confirmation of Alvaro Bedoya as a third Democratic commissioner, which could occur any day.

This post marks the start of a Truth on the Market symposium that brings together academics, practitioners, and other commentators to discuss issues relating to potential UMC-related rulemaking. Contributions to this symposium will cover a range of topics, including:

  • Constitutional and administrative-law limits on UMC rulemaking: does such rulemaking potentially present “major question” or delegation issues, or other issues under the Administrative Procedure Act (APA)? If so, what is the scope of permissible rulemaking?
  • Substantive issues in UMC rulemaking: costs and benefits to be considered in developing rules, prudential concerns, and similar concerns.
  • Using UMC to address competition-adjacent issues: consideration of how or whether the FTC can use its UMC authority to address firm conduct that is governed by other statutory or regulatory regimes. For instance, firms using copyright law and the Digital Millennium Copyright Act (DMCA) to limit competitors’ ability to alter or repair products, or labor or entry issues that might be governed by licensure or similar laws.

Timing and Structure of the Symposium

Starting tomorrow, one or two contributions to this symposium will be posted each morning. During the first two weeks of the symposium, we will generally try to group posts on similar topics together. When multiple contributions are posted on the same day, they will generally be implicitly or explicitly in dialogue with each other. The first week’s contributions will generally focus on constitutional and administrative law issues relating to UMC rulemaking, while the second week’s contributions will focus on more specific substantive topics. 

Readers are encouraged to engage with these posts through comments. In addition, academics, practitioners, and other antitrust and regulatory commentators are invited to submit additional contributions for inclusion in this symposium. Such contributions may include responses to posts published by others or newly developed ideas. Interested authors should submit pieces for consideration to Gus Hurwitz and Keith Fierro Benson.

This symposium will run through at least Friday, May 6. We do not, however, anticipate ending or closing it at that time. To the contrary, it is very likely that topics relating to FTC UMC rulemaking will continue to be timely and of interest to our community—we anticipate keeping the symposium running for the foreseeable future, and welcome submissions on an ongoing basis. Readers interested in these topics are encouraged to check in regularly for new posts, including by following the symposium page, the FTC UMC Rulemaking tag, or by subscribing to Truth on the Market for notifications of new posts.

For decades, consumer-welfare enhancement appeared to be a key enforcement goal of competition policy (antitrust, in the U.S. usage) in most jurisdictions:

  • The U.S. Supreme Court famously proclaimed American antitrust law to be a “consumer welfare prescription” in Reiter v. Sonotone Corp. (1979).
  • A study by the current adviser to the European Competition Commission's chief economist found that there are "many statements indicating that, seen from the European Commission, modern EU competition policy to a large extent is about protecting consumer welfare."
  • A comprehensive international survey presented at the 2011 Annual International Competition Network Conference found that a majority of competition authorities state that "their national [competition] legislation refers either directly or indirectly to consumer welfare," and that most competition authorities "base their enforcement efforts on the premise that they enlarge consumer welfare."

Recently, however, the notion that a consumer welfare standard (CWS) should guide antitrust enforcement has come under attack (see here). In the United States, this movement has been led by populist “neo-Brandeisians” who have “call[ed] instead for enforcement that takes into account firm size, fairness, labor rights, and the protection of smaller enterprises.” (Interestingly, there appear to be more direct and strident published attacks on the CWS from American critics than from European commentators, perhaps reflecting an unspoken European assumption that “ordoliberal” strong government oversight of markets advances the welfare of consumers and society in general.) The neo-Brandeisian critique is badly flawed and should be rejected.

Assuming that the focus on consumer welfare in U.S. antitrust enforcement survives this latest populist challenge, what considerations should inform the design and application of a CWS? Before considering this question, one must confront the context in which it arises—the claim that the U.S. economy has become far less competitive in recent decades and that antitrust enforcement has been ineffective at addressing this problem. After dispensing with this flawed claim, I advance four principles aimed at properly incorporating consumer-welfare considerations into antitrust-enforcement analysis.

Does the U.S. Suffer from Poor Antitrust Enforcement and Declining Competition?

Antitrust interventionists assert that lax U.S. antitrust enforcement has coincided with a serious decline in competition—a claim deployed to argue that, even if one assumes that promoting consumer welfare remains an overarching goal, U.S. antitrust policy nonetheless requires a course correction. After all, basic price theory indicates that a reduction in market competition raises deadweight loss and reduces consumers’ relative share of total surplus. As such, it might seem to follow that “ramping up antitrust” would lead to more vigorously competitive markets, featuring less deadweight loss and relatively more consumer surplus.

This argument, of course, elides the error-cost, rent-seeking, and public-choice issues that raise serious questions about the welfare effects of more aggressive "invigorated" enforcement (see here, for example). But more fundamentally, the argument rests on two incorrect premises:

  1. That competition has declined; and
  2. That U.S. trustbusters have applied the CWS in a narrow manner that is ineffective in addressing competitive problems.

Those premises (which also underlie President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy) do not stand up to scrutiny.

In a recent article in the Stigler Center journal Promarket, Yale University economics professor Fiona Scott-Morton and Yale Law student Leah Samuel accepted those premises in complaining about poor antitrust enforcement and substandard competition (hyperlinks omitted and emphasis in the original):

In recent years, the [CWS] term itself has become the target of vocal criticism in light of mounting evidence that recent enforcement—and what many call the “consumer welfare standard era” of antitrust enforcement—has been a failure. …

This strategy of non-enforcement has harmed markets and consumers. Today we see the evidence of this under-enforcement in a range of macroeconomic measures, studies of markups, as well as in merger post-mortems and studies of anticompetitive behavior that agencies have not pursued. Non-economist observers – journalists, advocates, and lawyers – who have noticed the lack of enforcement and the pernicious results have learned to blame "economics" and the CWS. They are correct that using CWS, as defined and warped by Chicago-era jurists and economists, has been a failure. That kind of enforcement—namely, insufficient enforcement—does not protect competition. But we argue that the "economics" at fault are the corporate-sponsored Chicago School assumptions, which are at best outdated, generally unjustified, and usually incorrect.

While the Chicago School caused the “consumer welfare standard” to become associated with an anti-enforcement philosophy in the legal community, it has never changed its meaning among PhD-trained economists.

To an economist, consumer welfare is a well-defined concept. Price, quality, and innovation are all part of the demand curve and all form the basis for the standard academic definition of consumer welfare. CW is the area under the demand curve and above the quality-adjusted price paid. … Quality-adjusted price represents all the value consumers get from the product less the price they paid, and therefore encapsulates the role of quality of any kind, innovation, and price on the welfare of the consumer.
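In standard notation (my gloss on the quoted definition, not a formula taken from the authors' article), this amounts to measuring consumer welfare as

CS = \int_0^{q^*} \big[ P(q) - \tilde{p} \big] \, dq

where P(q) is the inverse demand curve, \tilde{p} is the quality-adjusted price actually paid, and q^* is the quantity purchased.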

In my published response to Scott-Morton and Samuel, I summarized recent economic literature that contradicts the “competition is declining” claim. I also demonstrated that antitrust enforcement has been robust and successful, refuting the authors’ claim to the contrary (cross links to economic literature omitted):

There are only two problems with the [authors’] argument. First, it is not clear at all that competition has declined during the reign of this supposedly misused [CWS] concept. Second, the consumer welfare standard has not been misapplied at all. Indeed, as antitrust scholars and enforcement officials have demonstrated … modern antitrust enforcement has not adopted a narrow “Chicago School” view of the world. To the contrary, it has incorporated the more sophisticated analysis the authors advocate, and enforcement initiatives have been vigorous and largely successful. Accordingly, the authors’ call for an adjustment in antitrust enforcement is a solution in search of a non-existent problem.

In short, competitive conditions in U.S. markets are robust and have not been declining. Moreover, U.S. antitrust enforcement has been sophisticated and aggressive, fully attuned to considerations of quality and innovation.

A Suggested Framework for Consumer Welfare Analysis

Although recent claims of "weak" U.S. antitrust enforcement are baseless, they nevertheless place the nature of the CWS front and center. The CWS is a worthwhile concept, but it eludes a precise definition. That is as it should be. In our common law system, fact-specific analyses of particular competitive practices are key to determining whether welfare is or is not being advanced in the case at hand. There is no simple talismanic CWS formula that is readily applicable to diverse cases.

While Scott-Morton argues that consumer surplus (the area under the demand curve and above the price paid) is essentially coincident with the CWS, other leading commentators take account of the interests of producers as well. For example, the leading antitrust treatise writer, Herbert Hovenkamp, suggests thinking about consumer welfare in terms of "maxim[izing] output that is consistent with sustainable competition. Output includes quantity, quality, and improvements in innovation. As an aside, it is worth noting that high output favors suppliers, including labor, as well as consumers because job opportunities increase when output is higher." (Hovenkamp, Federal Antitrust Policy 102 (6th ed. 2020).)

Federal Trade Commission (FTC) Commissioner Christine Wilson (like Ken Heyer and other scholars) advocates a “total welfare standard” (consumer plus producer surplus). She stresses that it would beneficially:

  1. Make efficiencies more broadly cognizable, capturing cost reductions not passed through in the short run;
  2. Better enable the agencies to consider multi-market effects (whether consumer welfare gains in one market swamp consumer welfare losses in another market); and
  3. Better capture dynamic efficiencies (such as firm-specific efficiencies that are emulated by other “copycat” firms in the market).

Hovenkamp and Wilson point to the fact that efficiency-enhancing business conduct often has positive ramifications for both consumers and producers. As such, a CWS that focuses narrowly on short-term consumer surplus may prompt antitrust challenges to conduct that, properly understood, will prove beneficial to both consumers and producers over time.

With this in mind, I will now suggest four general “framework principles” to inform a CWS analysis that properly accounts for innovation and dynamic factors. These principles are tentative and merely suggestive, intended to prompt a further dialogue on CWS among interested commentators. (Also, many practical details will need to be filled in, based on further analysis.)

  1. Enforcers should consider all effects on consumer welfare in evaluating a transaction. Under the rule of reason, a reduction in surplus to particular defined consumers should not condemn a business practice (merger or non-merger) if other consumers are likely to enjoy accretions to surplus and if aggregate consumer surplus appears unlikely to decline, on net, due to the practice. Surplus need not be quantified—the likely direction of change in surplus is all that is required. In other words, "actual welfare balancing" is not required, consistent with the practical impossibility of quantifying net welfare effects in almost all cases (see, e.g., Hovenkamp, here). This principle is unaffected by market definition—all affected consumers should be assessed, whether they are "in" or "out" of a hypothesized market.
  2. Vertical intellectual-property-licensing contracts should not be subject to antitrust scrutiny unless there is substantial evidence that they are being used to facilitate horizontal collusion. This principle draws on the “New Madison Approach” associated with former Assistant Attorney General for Antitrust Makan Delrahim. It applies to a set of practices that further the interests of both consumers and producers. Vertical IP licensing (particularly patent licensing) “is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumer and producer surplus).” (See here, for example.) The 9th U.S. Circuit Court of Appeals’ refusal to condemn Qualcomm’s patent-licensing contracts (which had been challenged by the FTC) is consistent with this principle; it “evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support.” (See here.)
  3. Furthermore, enforcers should carefully assess the ability of “non-standard” commercial contracts—horizontal and vertical—to overcome market failures, as described by transaction-cost economics (see here, and here, for example). Non-standard contracts may be designed to deal with problems (for instance) of contractual incompleteness and opportunism that stymie efforts to advance new commercial opportunities. To the extent that such contracts create opportunities for transactions that expand or enhance market offerings, they generate new consumer surplus (new or “shifted out” demand curves) and enhance consumer welfare. Thus, they should enjoy a general (though rebuttable) presumption of legality.
  4. Fourth, and most fundamentally, enforcers should take account of cost-benefit analysis, rooted in error-cost considerations, in their enforcement initiatives, in order to further consumer welfare. As I have previously written:

Assuming that one views modern antitrust enforcement as an exercise in consumer welfare maximization, what does that tell us about optimal antitrust enforcement policy design? In order to maximize welfare, enforcers must have an understanding of – and seek to maximize the difference between – the aggregate costs and benefits that are likely to flow from their policies. It therefore follows that cost-benefit analysis should be applied to antitrust enforcement design. Specifically, antitrust enforcers first should ensure that the rules they propagate create net welfare benefits. Next, they should (to the extent possible) seek to calibrate those rules so as to maximize net welfare. (Significantly, Federal Trade Commissioner Josh Wright also has highlighted the merits of utilizing cost-benefit analysis in the work of the FTC.) [Eight specific suggestions for implementing cost-beneficial antitrust evaluations are then put forth in this article.]
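Stated compactly (my formalization of the passage above, not language drawn from the original article), the prescription is to adopt only enforcement rules whose expected benefits exceed their expected costs, and then to calibrate those rules to maximize the difference:

\max_{r} \; W(r) = B(r) - C(r), \qquad \text{subject to } W(r) > 0

where B(r) and C(r) denote the aggregate welfare benefits and costs (including error and administrative costs) expected to flow from enforcement rule r.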

Conclusion

One must hope that efforts to eliminate consumer welfare as the focal point of U.S. antitrust will fail. But even if those efforts do fail, market-oriented commentators should be alert to attempts by interventionist, market-skeptical scholars to "hijack" the CWS. A particular threat may involve efforts to define the CWS as merely involving short-term consumer-surplus maximization in narrowly defined markets. Such efforts could, if successful, justify highly interventionist enforcement protocols deployed against a wide variety of efficient (though too often mischaracterized) business practices.

To counter interventionist antitrust proposals, it is important to demonstrate that claims of faltering competition and inadequate antitrust enforcement under current norms simply are inaccurate. Such an effort, though necessary, is not enough.

In order to win the day, it will be important for market mavens to explain that novel business practices aimed at promoting producer surplus tend to increase consumer surplus as well. That is because efficiency-enhancing stratagems (often embodied in restrictive IP-licensing agreements and non-standard contracts) that overcome transaction-cost difficulties frequently pave the way for innovation and the dissemination of new technologies throughout the economy. Those effects, in turn, expand and create new market opportunities, yielding huge additions to consumer surplus—accretions that swamp short-term static effects.

Enlightened enforcers should apply enforcement protocols that allow such benefits to be taken into account. They should also focus on the interests of all consumers affected by a practice, not just a narrow subset of targeted potentially “harmed” consumers. Finally, public officials should view their enforcement mission through a cost-benefit lens, which is designed to promote welfare. 

There has been a rapid proliferation of proposals in recent years to closely regulate competition among large digital platforms. The European Union's Digital Markets Act (DMA, which will become effective in 2023) imposes a variety of data-use, interoperability, and non-self-preferencing obligations on digital "gatekeeper" firms. A host of other regulatory schemes are being considered in Australia, France, Germany, and Japan, among other countries (for example, see here). The United Kingdom has established a Digital Markets Unit "to operationalise the future pro-competition regime for digital markets." Recently introduced U.S. Senate and House bills—although touted as "antitrust reform" legislation—effectively amount to "regulation in disguise" of disfavored business activities by very large companies, including the major digital platforms (see here and here).

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). Without evidence, new regulatory initiatives are put forth as superior to long-established, consumer-based antitrust law enforcement.

The hope that new regulatory tools will somehow “solve” digital market competitive “problems” stems from the untested assumption that established consumer welfare-based antitrust enforcement is “not up to the task.” Untested assumptions, however, are an unsound guide to public policy decisions. Rather, in order to optimize welfare, all proposed government interventions in the economy, including regulation and antitrust, should be subject to decision-theoretic analysis that is designed to minimize the sum of error and decision costs (see here). What might such an analysis reveal?

Wonder no more. In a just-released Mercatus Center Working Paper, Professor Thom Lambert has conducted a decision-theoretic analysis that evaluates the relative merits of U.S. consumer welfare-based antitrust, ex ante regulation, and ongoing agency oversight in addressing the market power of large digital platforms. While explaining that antitrust and its alternatives have their respective costs and benefits, Lambert concludes that antitrust is the welfare-superior approach to dealing with platform competition issues. According to Lambert:

This paper provides a comparative institutional analysis of the leading approaches to addressing the market power of large digital platforms: (1) the traditional US antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the US House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. It first examines whether antitrust is too slow and indeterminate to tackle market power concerns arising from digital platforms. It next considers possible error costs resulting from the most prominent proposed conduct rules. It then shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture).

Lambert's analysis should be carefully consulted by American legislators and potential rule-makers (including at the Federal Trade Commission) before they institute digital platform regulation. One hopes that enlightened foreign competition officials will also take note of Professor Lambert's well-reasoned study.

A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.

Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (see summaries of Thom Lambert's masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.

Discussion

As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC "to make rules and regulations for the purpose of carrying out the provisions of this subchapter") is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC's authority on constitutional law grounds, and as "arbitrary and capricious" on administrative law grounds. What's more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and would chill innovative, efficiency-seeking business arrangements.

Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic that best fits under the "consumer protection" category. Rather, the FTC most likely would initiate a "Magnuson-Moss" rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe "rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act." Among other things, Section 18 requires that the commission's rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are "prevalent." 15 U.S.C. Sec. 57a(b)(3).

MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; "rule of law" problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of "arbitrariness" and "capriciousness" are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an "unfair act or practice."

Nevertheless, MMR rules would be subject to two substantial sources of legal risk.

The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations aimed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).

An FTC MMR data-protection regulation that successfully "occupied the field" and preempted such state provisions could eliminate that source of costs. The Magnuson-Moss Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied "obstacle preemption"). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot, then, of a Section 18 FTC data-protection rule enactment could be "the worst of all possible worlds," with drawn-out litigation leading to competing federal and state norms that multiply business costs.

The second source of risk arises out of the statutory definition of "unfair practices," found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices and thereby constrains the FTC's ability to promulgate and apply rules covering such practices. Section 5(n) states:

The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
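To make the structure of that framework concrete, the following sketch (purely illustrative; the variable names are mine, and the actual inquiry is fact-intensive rather than a simple true/false test) captures the three cumulative conditions that Section 5(n) imposes:

```python
def may_be_declared_unfair(causes_substantial_injury: bool,
                           reasonably_avoidable_by_consumers: bool,
                           outweighed_by_countervailing_benefits: bool) -> bool:
    """Schematic rendering of the Section 5(n) screen quoted above.

    An act or practice may be declared unfair only if it (1) causes or is
    likely to cause substantial consumer injury, (2) the injury is not
    reasonably avoidable by consumers themselves, and (3) the injury is not
    outweighed by countervailing benefits to consumers or to competition.
    """
    return (causes_substantial_injury
            and not reasonably_avoidable_by_consumers
            and not outweighed_by_countervailing_benefits)


# Example: a practice that causes substantial, unavoidable injury but whose
# harms are outweighed by countervailing benefits fails the screen.
print(may_be_declared_unfair(True, False, True))  # False
```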

Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data protection rule.

Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.

Conclusion

At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya will be confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it would quite possibly add an additional layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.

But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, are, however, topics for another day.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to reduce the number of false negatives (anticompetitive deals that currently escape challenge) under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) makes them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors' theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called "techies" in the authors' parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it seems facially contradictory that "techies" are both the group with the highest switching costs and the group that switches the most. The authors further assume that "techies" would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google's acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbents in technology markets.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is "for the market" itself, monopoly maintenance would fail to explain a rival's decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
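A minimal numerical sketch of this hypothetical (assuming, as the figures above imply, that the 75% accuracy rate applies symmetrically to anticompetitive and benign deals) reproduces the comparison:

```python
def merger_test_errors(total=10_000, anticompetitive=1_000, accuracy=0.75):
    """Compare an imperfect merger-screening test against no screening at all."""
    benign = total - anticompetitive
    false_positives = benign * (1 - accuracy)           # benign deals wrongly blocked
    false_negatives = anticompetitive * (1 - accuracy)  # bad deals wrongly cleared
    no_test_errors = anticompetitive                    # with no test, every bad deal clears
    return false_positives, false_negatives, no_test_errors

for n_bad in (1_000, 2_500):
    fp, fn, baseline = merger_test_errors(anticompetitive=n_bad)
    print(f"{n_bad} anticompetitive deals: test errors = {fp + fn:,.0f} "
          f"({fp:,.0f} false positives, {fn:,.0f} false negatives); "
          f"no-test errors = {baseline:,.0f}")
```

In this stylized setup, the test produces 2,500 errors regardless of the mix, so it beats doing nothing only when anticompetitive deals number more than 2,500, or one quarter of the total.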

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics for detecting those mergers that are anticompetitive. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study's industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” a vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
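As a quick consistency check (my arithmetic, not the authors'), the 4.1-percentage-point gap between those two continuation rates corresponds to roughly the 23.4% relative decline quoted earlier:

\frac{17.5 - 13.4}{17.5} \approx 0.234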

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that such acquisitions could increase innovation by boosting the returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies compete not just along innovation-related parameters, though these are obviously important, but also on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent's superior managerial efficiency to the acquired firm's assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, benefiting consumers, while also increasing firms' incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle both to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, such evidence is hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the pharmaceutical industry, which sits at the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech's efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they are yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The U.S. House this week passed H.R. 2668, the Consumer Protection and Recovery Act (CPRA), which authorizes the Federal Trade Commission (FTC) to seek monetary relief in federal courts for injunctions brought under Section 13(b) of the Federal Trade Commission Act.

Potential relief under the CPRA is comprehensive. It includes “restitution for losses, rescission or reformation of contracts, refund of money, return of property … and disgorgement of any unjust enrichment that a person, partnership, or corporation obtained as a result of the violation that gives rise to the suit.” What’s more, under the CPRA, monetary relief may be obtained for violations that occurred up to 10 years before the filing of the suit in which relief is requested by the FTC.

The Senate should reject the House version of the CPRA. Its monetary-recovery provisions require substantial narrowing if it is to pass cost-benefit muster.

The CPRA is a response to the Supreme Court’s April 22 decision in AMG Capital Management v. FTC, which held that Section 13(b) of the FTC Act does not authorize the commission to obtain court-ordered equitable monetary relief. As I explained in an April 22 Truth on the Market post, Congress’ response to the court’s holding should not be to grant the FTC carte blanche authority to obtain broad monetary exactions for any and all FTC Act violations. I argued that “[i]f Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices.”

Error costs and calculation difficulties counsel against pursuing monetary recovery in FTC unfair methods of competition cases. As I explained in my post:

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I [false positives] error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

These error-cost and calculation difficulties became even more pronounced as of July 1. On that date, the FTC unwisely voted 3-2 to withdraw a bipartisan 2015 policy statement providing that the commission would apply consumer welfare and rule-of-reason (weighing efficiencies against anticompetitive harm) considerations in exercising its unfair methods of competition authority (see my commentary here). This means that, going forward, the FTC will arrogate to itself unbounded discretion to decide what competitive practices are “unfair.” Business uncertainty, and the costly risk aversion it engenders, would be expected to grow enormously if the FTC could extract monies from firms due to competitive behavior deemed “unfair,” based on no discernible neutral principle.

Error costs and calculation problems also strongly suggest that monetary relief in FTC consumer-protection matters should be limited to cases of fraud or clear deception. As I noted:

[M]atters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

In short, the Senate should rewrite its Section 13(b) amendments to authorize FTC monetary recoveries only when consumer fraud or dishonesty is shown.

Finally, the Senate would be wise to sharply pare back the House language that allows the FTC to seek monetary exactions based on conduct that is a decade old. After such a long period of time, serious problems would arise in making accurate factual determinations of economic effects and in calculating specific damages. Allowing retroactive determinations based on a shorter "look-back" period prior to the filing of a complaint (three years, perhaps) would strike a better balance, permitting reasonable redress while controlling error costs.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at the European University Institute in Fiesole, Italy, and at EUI's Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act ("DMA"), which sets out a complicated regulatory system for digital "gatekeepers," with per se behavioral restrictions on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission's (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling up antitrust enforcement in the marginal case through the adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process now unfolding gives cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by an ideological logjam.

The issue, rather, is one of haste. There is a lot of learning in the cases. The point, simply put, is that a further court-legislative dialogue would yield additional information—or what Guido Calabresi has called "starting points" for regulation—that premature legislative intervention sweeps under the rug. This issue is important because specification errors (see Doug Melamed's symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi argued, judges who know the legal topography are well placed to elicit the preferences of society. They are better placed, he added, than government agency officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook's concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook's concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture-capital funding cruises at historic highs, fueling new-firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation?

Besides, ad hoc statutes such as the ones under discussion are likely to run quickly and dramatically into the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is Microsoft's market power over Windows still relevant today, and is it not already constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like "smart" or "future-proof" regulation consists in building on first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But does the economic literature actually support rules stricter toward defendants than the rule-of-reason framework that applies in many unilateral-conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated theory of multi-sided markets. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, such as ridesharing, targeted advertising, and social networks. Multi-sided market theory renders these tradeoffs visible, but it does not come with a clear recipe for how to resolve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.
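
To make the tradeoff concrete, one can write down a purely illustrative welfare decomposition (the notation is my own sketch, not a result taken from the multi-sided market literature):

\[ W \;=\; CS_{B}(p_{B}) \;+\; CS_{S}(p_{S}) \;+\; \pi(p_{B}, p_{S}) \]

where \(CS_{B}\) and \(CS_{S}\) denote cardholder-side and merchant-side surplus and \(\pi\) is platform profit. A regulatory cap that lowers the merchant fee \(p_{S}\) will typically lead the platform to raise the cardholder price \(p_{B}\) (or cut rewards), so the sign of the net change in \(W\) turns on demand responses and cross-side benefits that must be estimated case by case; the theory exposes the tradeoff but does not sign it.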

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?
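
The flavor of the tipping argument can be conveyed with a minimal simulation in the spirit of Brian Arthur's increasing-returns adoption models. This is a sketch under assumed, illustrative parameters (the function name, payoff values, and network weight are my own choices, not drawn from the book):

```python
import random

def simulate_adoption(n_agents=10_000, network_weight=0.05, seed=None):
    """Stylized increasing-returns adoption model (in the spirit of Arthur 1989).

    Two standards, A and B, compete. Agents arrive one at a time in random
    order: half intrinsically lean toward A, half toward B, but A has the
    higher average stand-alone value, so it is the socially better standard.
    Each agent adopts the standard whose stand-alone value plus a network
    bonus (proportional to its installed base) is larger. An early run of
    B-leaning adopters can therefore tip the whole market onto the
    lower-value standard.
    """
    if seed is not None:
        random.seed(seed)
    # Stand-alone values: (value of A, value of B) for each agent type.
    values = {"leans_A": (1.0, 0.7), "leans_B": (0.7, 0.9)}
    installed = {"A": 0, "B": 0}
    for _ in range(n_agents):
        agent_type = "leans_A" if random.random() < 0.5 else "leans_B"
        value_a, value_b = values[agent_type]
        payoff_a = value_a + network_weight * installed["A"]
        payoff_b = value_b + network_weight * installed["B"]
        installed["A" if payoff_a >= payoff_b else "B"] += 1
    return installed

if __name__ == "__main__":
    runs = [simulate_adoption(seed=s) for s in range(50)]
    locked_on_b = sum(r["B"] > r["A"] for r in runs)
    print(f"Runs locking in on the lower-value standard B: {locked_on_b}/50")
```

In a sizable share of runs the market locks in on B even though coordinating on A would deliver higher average value, which is the theory's point; what the simulation does not tell us is how to recognize, ex ante and in real markets, that the prevailing standard is the inferior one.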

Last, as Bo Heiden’s thought provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to maintain an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, as in relation to algorithmic collusion, interlocking directorates, or "killer" acquisitions. But the applications are discrete and industry-specific. Taken together, they are insufficient to declare that the antitrust apparatus is dated and requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some of the sweeping policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no procompetitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors, most of which suggest that concentration has risen. As with any research exercise, these reports' findings deserve critical examination before they can be deemed supportive of a claim of "sufficient experience." Moreover, there is no reason to subject these reports to a lower standard of accountability on the grounds that they were often drafted by experts at the request of antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one's mind, one can see immediately that the findings of the expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of "Tech's Big Dust-Up."

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports do not seriously consider the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, in which the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not yet know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

As should by now be clear from these few lines, my cautionary note against antitrust statutorification may be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent district court dismissal of the FTC case came nowhere near ruling out the applicability of U.S. antitrust laws to Facebook's alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as net neutrality requirements that may reduce investment in broadband by internet service providers—and the imposition of new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to engage in new antitrust rulemaking, are likely to create costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty in addition to public and private resources wasted on litigation.