
At the Jan. 26 Policy in Transition forum—the Mercatus Center at George Mason University’s second annual antitrust forum—various former and current antitrust practitioners, scholars, judges, and agency officials held forth on the near-term prospects for the neo-Brandeisian experiment undertaken in recent years by both the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ). In conjunction with the forum, Mercatus also released a policy brief on 2022’s significant antitrust developments.

Below, I summarize some of the forum’s noteworthy takeaways, followed by concluding comments on the current state of the antitrust enterprise, as reflected in forum panelists’ remarks.

Takeaways

    1. The consumer welfare standard is neither a recent nor an arbitrary antitrust-enforcement construct, and it should not be abandoned in order to promote a more “enlightened” interventionist antitrust.

George Mason University’s Donald Boudreaux emphasized in his introductory remarks that the standard goes back to Adam Smith, who noted in “The Wealth of Nations” nearly 250 years ago that the appropriate end of production is the consumer’s benefit. Moreover, American Antitrust Institute President Diana Moss, a leading proponent of more aggressive antitrust enforcement, argued in standalone remarks against abandoning the consumer welfare standard, as it is sufficiently flexible to justify a more interventionist agenda.

    2. The purported economic justifications for a far more aggressive antitrust-enforcement policy on mergers remain unconvincing.

Moss’ presentation expressed skepticism about vertical-merger efficiencies and called for more aggressive challenges to such consolidations. But Boudreaux skewered those arguments in a recent four-point rebuttal at Café Hayek. As he explains, Moss’ call for more vertical-merger enforcement ignores the fact that “no one has stronger incentives than do the owners and managers of firms to detect and achieve possible improvements in operating efficiencies – and to avoid inefficiencies.”

Moss’ complaint about chronic underenforcement mistakes by overly cautious agencies also ignores the fact that there will always be mistakes, and there is no reason to believe “that antitrust bureaucrats and courts are in a position to better predict the future [regarding which efficiencies claims will be realized] than are firm owners and managers.” Moreover, Moss provided “no substantive demonstration or evidence that vertical mergers often lead to monopolization of markets – that is, to industry structures and practices that harm consumers. And so even if vertical mergers never generate efficiencies, there is no good argument to use antitrust to police such mergers.”

And finally, Boudreaux considers Moss’ complaint that a court refused to condemn the AT&T-Time Warner merger, arguing that this does not demonstrate that antitrust enforcement is deficient:

[A]s soon as the . . . merger proved to be inefficient, the parties themselves undid it. This merger was undone by competitive market forces and not by antitrust! (Emphasis in the original.)

    3. The agencies, however, remain adamant in arguing that merger law has been badly underenforced. As such, the new leadership plans to charge ahead, willing to challenge more mergers based on mere market structure while paying little heed to efficiency arguments or actual showings of likely future competitive harm.

In her afternoon remarks at the forum, Principal Deputy Assistant U.S. Attorney General for Antitrust Doha Mekki highlighted five major planks of Biden administration merger enforcement going forward.

  • Clayton Act Section 7 is an incipiency statute. Thus, “[w]hen a [mere] change in market structure suggests that a firm will have an incentive to reduce competition, that should be enough [to justify a challenge].”
  • “Once we see that a merger may lead to, or increase, a firm’s market power, only in very rare circumstances should we think that a firm will not exercise that power.”
  • A structural presumption “also helps businesses conform their conduct to the law with more confidence about how the agencies will view a proposed merger or conduct.”
  • Efficiencies defenses will be given short shrift, and perhaps ignored altogether. This is because “[t]he Clayton Act does not ask whether a merger creates a more or less efficient firm—it asks about the effect of the merger on competition. The Supreme Court has never recognized efficiencies as a defense to an otherwise illegal merger.”
  • Merger settlements have often failed to preserve competition, and they will be highly disfavored. Therefore, expect a lot more court challenges to mergers than in recent decades. In short, “[w]e must be willing to litigate. . . . [W]e need to acknowledge the possibility that sometimes a court might not agree with us—and yet go to court anyway.”

Mekki’s comments suggest to me that the soon-to-be-released new draft merger guidelines may emphasize structural market-share tests, generally reject efficiencies justifications, and eschew the economic subtleties found in the current guidelines.

    4. The agencies—and the FTC, in particular—have serious institutional problems that undermine their effectiveness, and risk a loss of credibility before the courts in the near future.

In his address to the forum, former FTC Chairman Bill Kovacic lamented the inefficient limitations on reasoned FTC deliberations imposed by the Sunshine Act, which chills informal communications among commissioners. He also pointed to the United States’ globally unique status of having two enforcers with duplicative antitrust authority, and lamented the lack of policy coherence that reflects imperfect coordination between the agencies.

Perhaps most importantly, Kovacic raised the specter of the FTC losing credibility in a possible world where Humphrey’s Executor is overturned (see here) and the commission is granted little judicial deference. He suggested taking lessons on policy planning and formulation from foreign enforcers—the United Kingdom’s Competition and Markets Authority, in particular. He also decried agency officials’ decisions to belittle prior administrations’ enforcement efforts, seeing it as detracting from the international credibility of U.S. enforcement.

    5. The FTC is embarking on a novel interventionist path at odds with decades of enforcement policy.

In luncheon remarks, Commissioner Christine S. Wilson lamented the lack of collegiality and consultation within the FTC. She warned that far-reaching rulemakings and other new interventionist initiatives may yield a backlash that undermines the institution.

Following her presentation, a panel of FTC experts discussed several aspects of the commission’s “new interventionism.” According to one panelist, the FTC’s new Section 5 Policy Statement on Unfair Methods of Competition (which ties “unfairness” to arbitrary and subjective terms) “will not survive in” (presumably, will be given no judicial deference by) the courts. Another panelist bemoaned rule-of-law problems arising from FTC actions, called for consistency in FTC and DOJ enforcement policies, and warned that the new merger guidelines will represent a “paradigm shift” that generates more business uncertainty.

The panel expressed doubts about the legal prospects for a proposed FTC rule on noncompete agreements, and noted that constitutional challenges to the agency’s authority may engender additional difficulties for the commission.

    6. The DOJ is greatly expanding its willingness to litigate, and is taking actions that may undermine its credibility in court.

Assistant U.S. Attorney General for Antitrust Jonathan Kanter has signaled a disinclination to settle, as well as an eagerness to litigate large numbers of cases (toward that end, he has hired a huge number of litigators). One panelist noted that, given this posture from the DOJ, there is a risk that judges may come to believe that the department’s litigation decisions are not well-grounded in the law and the facts. The business community may also have a reduced willingness to “buy in” to DOJ guidance.

Panelists also expressed doubts about the wisdom of DOJ bringing more “criminal Sherman Act Section 2” cases. Although the Sherman Act is a criminal statute, such prosecutions raise concerns under criminal law’s “beyond a reasonable doubt” standard and the Due Process Clause. Panelists also warned that, if the new merger guidelines are “unsound,” they may detract from the DOJ’s credibility in federal court.

    7. International antitrust developments have introduced costly new ex ante competition-regulation and enforcement-coordination problems.

As one panelist explained, the European Union’s implementation of the new Digital Markets Act (DMA) will undermine market forces. The DMA is a form of ex ante regulation—primarily applicable to large U.S. digital platforms—that will harmfully interject bureaucrats into network planning and design. The DMA will lead to inefficiencies, market fragmentation, and harm to consumers, and will inevitably have spillover effects outside Europe.

Even worse, the DMA will not displace the application of EU antitrust law, but will merely add to its burdens. Regrettably, the DMA’s ex ante approach is being imitated by many other enforcement regimes, and the U.S. government tacitly supports it. The DMA has not been included in the U.S.-EU joint competition dialogue, which therefore risks failure. Canada and the U.K. should also be added to the dialogue.

Other International Concerns

The international panelists also noted that there is an unfortunate lack of convergence on antitrust procedures. Furthermore, different jurisdictions manifest substantial inconsistencies in their approaches to multinational merger analysis, where better coordination is needed. There is a special problem in the areas of merger review and of criminal leniency for price fixers: when multiple jurisdictions need to “sign off” on an enforcement matter, the “most restrictive” jurisdiction has an effective veto.

Finally, former Assistant U.S. Attorney General for Antitrust James Rill—perhaps the most influential promoter of the adoption of sound antitrust laws worldwide—closed the international panel with a call for enhanced transnational cooperation. He highlighted the importance of global convergence on sound antitrust procedures, emphasizing due process. He also advocated bolstering International Competition Network (ICN) and OECD Competition Committee convergence initiatives, and explained that greater transparency in agency-enforcement actions is warranted. In that regard, Rill said, ICN nongovernmental advisers should be given a greater role.

Conclusion

Taken as a whole, the forum’s various presentations painted a rather gloomy picture of the short-term prospects for sound, empirically based, economics-centric antitrust enforcement.

In the United States, the enforcement agencies are committed to far more aggressive antitrust enforcement, particularly with respect to mergers. The agencies’ new approach downplays efficiencies; they will be quick to presume that broad categories of business conduct are anticompetitive, relying far less on case-specific economic analysis.

The outlook is also bad overseas, as European Union enforcers are poised to implement new ex ante regulation of competition by large platforms as an addition to—not a substitute for—established burdensome antitrust enforcement. Most foreign jurisdictions appear to be following the European lead, and the U.S. agencies are doing nothing to discourage them. Indeed, they appear to fully support the European approach.

The consumer welfare standard, which until recently was the stated touchstone of American antitrust enforcement—and was given at least lip service in Europe—has more or less been set aside. The one saving grace in the United States is that the federal courts may put a halt to the agencies’ overweening ambitions, but that will take years. In the meantime, consumer welfare will suffer and welfare-enhancing business conduct will be disincentivized. The EU courts also may place a minor brake on European antitrust expansionism, but that is less certain.

Recall, however, that when evils flew out of Pandora’s box, hope remained. Let us hope, then, that the proverbial worm will turn, and that new leadership—inspired by hopeful and enlightened policy advocates—will restore principled antitrust grounded in the promotion of consumer welfare.

The Federal Trade Commission’s (FTC) Jan. 5 “Notice of Proposed Rulemaking on Non-Compete Clauses” (NPRMNCC) is the first substantive FTC Act Section 6(g) “unfair methods of competition” rulemaking initiative following the release of the FTC’s November 2022 Section 5 Unfair Methods of Competition Policy Statement. Any final rule based on the NPRMNCC stands virtually no chance of survival before the courts. What’s more, this FTC initiative threatens to have a major negative economic-policy impact and poses an institutional threat to the commission itself. Accordingly, the NPRMNCC should be withdrawn or, as a “second worst” option, substantially pared back and recast.

The NPRMNCC is succinctly described, and its legal risks ably summarized, in a recent commentary by Gibson Dunn attorneys. The proposal is sweeping in its scope. The NPRMNCC states that it “would, among other things, provide that it is an unfair method of competition for an employer to enter into or attempt to enter into a non-compete clause with a worker; to maintain with a worker a non-compete clause; or, under certain circumstances, to represent to a worker that the worker is subject to a non-compete clause.”

The Gibson Dunn commentary adds that it “would require employers to rescind all existing non-compete provisions within 180 days of publication of the final rule, and to provide current and former employees notice of the rescission.‎ If employers comply with these two requirements, the rule would provide a safe harbor from enforcement.”‎

As I have explained previously, any FTC Section 6(g) rulemaking is likely to fail as a matter of law. Specifically, the structure of the FTC Act indicates that Section 6(g) is best understood as authorizing procedural regulations, not substantive rules. What’s more, Section 6(g) rules raise serious questions under the U.S. Supreme Court’s nondelegation and major questions doctrines (given the breadth and ill-defined nature of “unfair methods of competition”) and under administrative law (very broad unfair-methods-of-competition rules may be deemed “arbitrary and capricious” and raise due process concerns). The cumulative weight of these legal concerns “makes it highly improbable that substantive UMC rules will ultimately be upheld.”

The legal concerns raised by Section 6(g) rulemaking are particularly acute in the case of the NPRMNCC, which is exceedingly broad and deals with a topic—employment-related noncompete clauses—with which the FTC has almost no experience. FTC Commissioner Christine Wilson highlights this legal vulnerability in her dissenting statement opposing issuance of the NPRMNCC.

As Andrew Mercado and I explained in our commentary on potential FTC noncompete rulemaking: “[a] review of studies conducted in the past two decades yields no uniform, replicable results as to whether such agreements benefit or harm workers.” In a comprehensive literature review made available online at the end of 2019, FTC economist John McAdams concluded that “[t]here is little evidence on the likely effects of broad prohibitions of non-compete agreements.” McAdams also commented on the lack of knowledge regarding the effects that noncompetes may have on ultimate consumers. Given these realities, the FTC would be particularly vulnerable to having a court hold that a final noncompete rule (even assuming that it somehow surmounted other legal obstacles) lacked an adequate factual basis, and thus was arbitrary and capricious.

The poor legal case for proceeding with the NPRMNCC is rendered even weaker by the existence of robust state-law provisions concerning noncompetes in almost every state (see here for a chart comparing state laws). Differences in state jurisprudence may enable “natural experimentation,” whereby changes made to state law that differ across jurisdictions facilitate comparisons of the effects of different approaches to noncompetes. Furthermore, changes to noncompete laws in particular states that are seen to cause harm, or generate benefits, may allow “best practices” to emerge and thereby drive welfare-enhancing reforms in multiple jurisdictions.

The Gibson Dunn commentary points out that, “[a]s a practical matter, the proposed [FTC noncompete] rule would override existing non-compete requirements and practices in the vast majority of states.” Unfortunately, then, the NPRMNCC would largely do away with the potential benefits of competitive federalism in the area of noncompetes. In light of that, federal courts might well ask whether Congress meant to give the FTC preemptive authority over a legal field traditionally left to the states, merely by making a passing reference to “mak[ing] rules and regulations” in Section 6(g) of the FTC Act. Federal judges would likely conclude that the answer to this question is “no.”

Economic Policy Harms

How much economic harm could an FTC rule on noncompetes cause, if the courts almost certainly would strike it down? Plenty.

The affront to competitive federalism, which would prevent optimal noncompete legal regimes from developing (see above), could reduce the efficiency of employment contracts and harm consumer welfare. It would be exceedingly difficult (if not impossible) to measure such harms, however, because there would be no alternative “but-for” worlds with differing rules that could be studied.

The broad ban on noncompetes predictably will prevent—or at least chill—the use of noncompete clauses to protect business-property interests (including trade secrets and other intellectual-property rights) and to protect value-enhancing investments in worker training. (See here for a 2016 U.S. Treasury Department Office of Economic Policy Report that lists some of the potential benefits of noncompetes.) The NPRMNCC fails to account for those and other efficiencies, which may be key to value-generating business-process improvements that help drive dynamic economic growth. Once again, however, it would be difficult to demonstrate the nature or extent of such foregone benefits, in the absence of “but-for” world comparisons.

Business-litigation costs would also inevitably arise, as uncertainties in the language of a final noncompete rule were worked out in court (prior to the rule’s legal demise). The opportunity cost of firm resources directed toward rule-related issues, rather than to business-improvement activities, could be substantial. The opportunity cost of directing FTC resources to wasteful noncompete-related rulemaking work, rather than potential welfare-enhancing endeavors (such as anti-fraud enforcement activity), also should not be neglected.

Finally, the substantial error costs that would attend designing and seeking to enforce a final FTC noncompete rule, and the affront to the rule of law that would result from creating a substantial new gap between FTC and U.S. Justice Department competition-enforcement regimes, merit note (see here for my discussion of these costs in the general context of UMC rulemaking).

Conclusion

What, then, should the FTC do? It should withdraw the NPRMNCC.

If the FTC is concerned about the effects of noncompete clauses, it should commission appropriate economic research, and perhaps conduct targeted FTC Act Section 6(b) studies directed at noncompetes (focused on industries where noncompetes are common or ubiquitous). In light of that research, it might be in a position to address legal policy toward noncompetes in competition advocacy before the states, or in testimony before Congress.

If the FTC still wishes to engage in some rulemaking directed at noncompete clauses, it should consider a targeted FTC Act Section 18 consumer-protection rulemaking (see my discussion of this possibility, here). Unlike Section 6(g), the legality of Section 18 substantive rulemaking (which is directed at “unfair or deceptive acts or practices”) is well-established. Categorizing noncompete-clause-related practices as “deceptive” is plainly a nonstarter, so the Commission would have to base its rulemaking on defining and condemning specified “unfair acts or practices.”

Section 5(n) of the FTC Act specifies that the Commission may not declare an act or practice to be unfair unless it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” This is a cost-benefit test that plainly does not justify a general ban on noncompetes, based on the previous discussion. It probably could, however, justify a properly crafted narrower rule, such as a requirement that an employer notify its employees of a noncompete agreement before they accept a job offer (see my analysis here).  

Should the FTC nonetheless charge forward and release a final competition rule based on the NPRMNCC, it will face serious negative institutional consequences. In the previous Congress, Sens. Mike Lee (R-Utah) and Chuck Grassley (R-Iowa) introduced legislation that would strip the FTC of its antitrust authority (leaving all federal antitrust enforcement in DOJ hands). Such legislation could gain traction if the FTC were perceived as engaging in massive institutional overreach. An unprecedented Commission effort to regulate one aspect of labor contracts (noncompete clauses) nationwide surely could be viewed by Congress as a prime example of such overreach. The FTC should keep that in mind if it values maintaining its longstanding role in American antitrust-policy development and enforcement.

Research still matters, so I recommend the video from the Federal Trade Commission’s 15th Annual Microeconomics Conference, if you’ve not already seen it. It’s a valuable event, and it’s part of the FTC’s still-important statutory-research mission. It also reminds me that the FTC’s excellent, if somewhat diminished, Bureau of Economics still has no director; Marta Woskinska concluded her very short tenure in February. Eight-plus months of hiring and appointments (and many departures) later, she’s not been replaced. Priorities.

The UMC Watch Continues: In 2015, the FTC issued a Statement of Enforcement Principles Regarding “Unfair Methods of Competition.” On July 1, 2021, the Commission withdrew the statement on a 3-2 vote, sternly rebuking its predecessors: “the 2015 Statement … abrogates the Commission’s congressionally mandated duty to use its expertise to identify and combat unfair methods of competition even if they do not violate a separate antitrust statute.”

That was surprising. First, it actually presaged a downturn in enforcement. Second, while the 2015 statement was not empty, many agreed with Commissioner Maureen Ohlhausen’s 2015 dissent that it offered relatively little new guidance on UMC enforcement. In other words, stating that conduct “will be evaluated under a framework similar to the rule of reason” seemed not much of a limiting principle to some, if far too much of one to others. Eye of the beholder. 

Third, as Commissioners Noah Phillips and Christine S. Wilson noted in their dissent, given that there was no replacement, it was “[h]inting at the prospect of dramatic new liability without any guide regarding what the law permits or proscribes.” The business and antitrust communities were put on watch: winter is coming. Winter is still coming. In September, Chair Lina Khan stated that one of her top priorities “has been the preparation of a policy statement on Section 5 that reflects the statutory text, our institutional structure, the history of the statute, and the case law.” Indeed. More recently, she said she was hopeful that the statement would be released in “the coming weeks.”  Stay tuned. 

There was September success, and a little mission creep at the DOJ Antitrust Division: Congrats to the U.S. Justice Department for some uncharacteristic success, and not a little creativity. In U.S. v. Nathan Nephi Zito, the defendant pleaded guilty to attempted monopolization for proposing that he and a competitor allocate markets for highway-crack-sealing services.

The odd part, and an FTC connection that was noted by Pallavi Guniganti and Gus Hurwitz: at issue was a single charge of monopolization in violation of Section 2 of the Sherman Act. There’s long been widespread agreement that the bounds of Section 5 UMC authority exceed those of the Sherman Act, along with widespread disagreement on the extent to which that’s true, but there was consensus on invitations to collude. Agreements to fix prices or allocate markets are per se violations of Section 1. Refused invitations to collude are not, or were not. But as the FTC stated in its now-withdrawn Statement of Enforcement Principles, UMC authority extends to conduct “that, if allowed to mature or complete, could violate the Sherman or Clayton Act.” But the FTC didn’t bring the case against Zito, the competitor rejected the invitation, and nobody alleged a violation of either Sherman Section 1 or FTC Section 5. 

The admitted conduct seems indefensible under Section 5, so perhaps there’s no harm ex post, but I wonder where this is going.

DOJ also had a Halloween win when Judge Florence Y. Pan of the U.S. Court of Appeals for the D.C. Circuit, sitting by designation in the U.S. District Court for the District of Columbia, issued an order blocking the proposed merger of Penguin Random House and Simon & Schuster. The opinion is still sealed. But based on the complaint, it was a relatively straightforward monopsony case, albeit one with a very narrow market definition (two market definitions, actually, with most of the complaint, and the more convincing story, about “the market for acquisition of publishing rights to anticipated top-selling books”). Stephen King, Oprah Winfrey, etc.

Maybe they got it right, although Assistant Attorney General Jonathan Kanter’s description seems a bit of puffery, if not a mountain of it: “The proposed merger would have reduced competition, decreased author compensation, diminished the breadth, depth, and diversity of our stories and ideas, and ultimately impoverished our democracy.”

At the margin? The Division did not need to prove harm to consumers downstream, although it alleged such harm. Here’s a policy question: suppose the deal would have lowered advances paid to top-selling authors—those cited in the complaint are mostly in the millions of dollars—but suppose DOJ was wrong about the larger market and downstream effects. If publisher savings were accompanied by a slight reduction in book prices, not output, would that have been a bad result?    

And you thought entry was procompetitive? For some, Halloween fright does not abate with daylight. On Nov. 1, Sen. Elizabeth Warren (D-Mass.) sent a letter to Lina Khan and Jonathan Kanter, writing “with serious concern about emerging competition and consumer protection issues that Big Tech’s expansion into the automotive industry poses.” I gather that “emerging” is a term of art in legal French meaning “possible, maybe.” The senator writes with great imagination and not a little drama, cataloging numerous allegations about such worrisome conduct as bundling.

Of course, some tying arrangements are anticompetitive, but bundling is not necessarily or even typically anticompetitive. As an article still posted on the DOJ website explains, the “pervasiveness of tying in the economy shows that it is generally beneficial.” For instance, in the automotive industry, most consumers seem to prefer buying their cars whole rather than in parts.

It’s impossible to know that none of Warren’s myriad purported harms will come to pass in any market, but nobody has argued that the agencies ought to stop screening Hart-Scott-Rodino submissions. The need to act “quickly and decisively” on so many issues seems dubious. Perhaps there are advantages to having technically sophisticated, data-rich, well-financed firms enter into product R&D and competition in new areas, including nascent product markets. We might want more of such things for the technology that goes into vehicles that hurtle us down the highway.

The Oct. 21 Roundup highlighted the FTC’s recent flood of regulatory proposals, including the “commercial surveillance” ANPR. Three new ANPRs were mentioned that week: one regarding “Junk Fees,” one regarding “Fake Reviews and Endorsements,” and one regarding potential updates to the FTC’s “Funeral Rule.” Periodic rule review is a requirement, so a potential update is not unusual. On the others, I recommend Commissioner Wilson’s dissents for an overview of legitimate concerns. In sum, the junk-fees ANPR is “sweeping in its breadth; may duplicate, or contradict, existing laws and rules; is untethered from a solid foundation of FTC enforcement; relies on flawed assumptions and vague definitions; ignores impacts on competition; and diverts scarce agency resources from important law enforcement efforts.” And while some “junk fees” are the result of deceptive or unfair practices under established standards, the ANPR also seems to sweep in potentially useful and efficient unbundling. Wilson finds the “fake reviews and endorsements” ANPR clearer and better focused, but another bridge too far: it contemplates a burdensome regulatory scheme while active enforcement and guidance initiatives, which may adequately address material and deceptive advertising practices, are already underway.

As Wilson notes, the costs of regulating are substantial, too. New proposals spring forth while overdue projects founder. For instance, the long, long overdue “10-year” review of the FTC’s Eyeglass Rule last saw an ANPR in 2015, following a 2004 decision to leave an earlier version of the rule in place. The Contact Lens Rule, implementing the Fairness to Contact Lens Consumers Act, was initially adopted in 2004 and amended 16 years later, partly because the central provision of the rule had proved unenforceable, resulting in chronic noncompliance. The chair is also considering rulemaking on noncompete clauses. Again, there are worries that some anticompetitive conduct might prompt considerably overbroad regulation, given legitimate applications, a developing and mixed body of empirical literature, and recent activity in the states. It’s another area in which to wonder whether the FTC has either congressional authorization or the resources, experience, and expertise to regulate the conduct at issue: potentially, every employment agreement in the United States.

A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.

Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.

The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).

(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse of dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)

So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They are enthusiastically announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.

Fordham Law’s Antitrust Conference

Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.

In his remarks, AAG Kanter seemed to be endorsing a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. In taking such an approach, however, there is a serious risk that efficiency-seeking actions may be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)

Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).

Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).

Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was even bolder and more alarming. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).

In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics free” anti-merger revisionism.

Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.

Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.

Conclusion

What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.

It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court. 

Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out. 

A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)

The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm who sold doesn’t start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.

And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.

Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.

This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.

The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:

What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?

This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.

Do Monopoly Profits Always Exceed Joint Duopoly Profits?

Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since profits are highest when there is only a single monopolist, that seller will always have an incentive to buy out any competitors.

The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although it is not unique to him. As Salop points out:

I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

We see a presumption against mergers in the recent FTC challenge of Meta’s purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:

The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.

Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?

I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.

For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant half of the duopoly profit to prevent entry. It is not, however, necessarily profitable for the incumbent to pay both potential entrants half of the duopoly profit to avoid entry by either. 

Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.

For a simple example, consider a Cournot oligopoly model with an industry-inverse demand curve of P(Q)=1-Q and constant marginal costs that are normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profits. A monopolist makes a profit of 1/4. A duopolist can expect to earn a profit of 1/9. If there are three potential entrants, plus the incumbent, the monopolist must pay each entrant the duopoly profit of 1/9, for a total of 3*(1/9)=1/3, which exceeds the monopoly profit of 1/4.
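To make that arithmetic concrete, here is a minimal Python sketch of the Cournot example above (my own illustration of the setup just described, not code from Salop or the agencies). It computes per-firm equilibrium profits and compares the incumbent’s total buyout bill with the monopoly profit as the pool of potential entrants grows.

```python
# A sketch of the Cournot setup above: inverse demand P(Q) = 1 - Q,
# zero marginal cost, N symmetric firms each earning 1/(N+1)^2.

def cournot_profit(n_firms: int) -> float:
    """Per-firm equilibrium profit with n_firms symmetric Cournot sellers."""
    return 1.0 / (n_firms + 1) ** 2

monopoly = cournot_profit(1)  # 1/4
duopoly = cournot_profit(2)   # 1/9: each entrant's outside option, since a
                              # lone entrant would face only the incumbent

# To remain a monopolist, the incumbent must pay every potential entrant
# its duopoly profit. Compare the total bill with the monopoly profit.
for entrants in (1, 2, 3):
    payout = entrants * duopoly
    verdict = "worth paying" if payout < monopoly else "too costly"
    print(f"{entrants} entrant(s): payout {payout:.3f} "
          f"vs. monopoly profit {monopoly:.3f} -> {verdict}")
```

With three potential entrants, the total payout (1/3) exceeds the monopoly profit (1/4), matching the arithmetic above. If anything, this simple comparison flatters the buyout strategy, since it ignores the positive oligopoly profit the incumbent would still earn if it simply allowed entry.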

In the Nash/Cournot equilibrium, the incumbent will not acquire any of the competitors, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.

If we observe an acquisition in a market with many potential entrants (and any given market may or may not have them), it cannot be that the merger is solely about obtaining monopoly profits, since the model above shows that the incumbent doesn’t have incentives to do that.

If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for that deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead to empirical analysis of the merger and market in question, as to whether it would be profitable to acquire all potential entrants.

The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:

Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.

If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.

Many Reasons for Mergers

But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.

If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model does not give us a way to disentangle when mergers would stop without antitrust enforcement.

Mergers do not affect the production side of the economy, under this assumption, but exist solely to gain the market power to manipulate prices. Since the model finds no downside for the incumbent in acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.

An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.

[On Monday, June 27, Concurrences hosted a conference on the Rulemaking Authority of the Federal Trade Commission. This conference featured the work of contributors to a new book on the subject edited by Professor Dan Crane. Several of these authors have previously contributed to the Truth on the Market FTC UMC Symposium. We are pleased to be able to share with you excerpts or condensed versions of chapters from this book prepared by authors of those chapters. Our thanks and compliments to Dan and Concurrences for bringing together an outstanding event and set of contributors and for supporting our sharing them with you here.]

[The post below was authored by former Federal Trade Commission Acting Chair Maureen K. Ohlhausen and former Assistant U.S. Attorney General James F. Rill.]

Since its founding in 1914, the Federal Trade Commission (FTC) has held a unique and multifaceted role in the U.S. administrative state and the economy. It possesses powerful investigative and information-gathering powers, including through compulsory processes; a multi-layered administrative-adjudication process to prosecute “unfair methods of competition” (UMC) (and later, “unfair or deceptive acts or practices” (UDAP), as well); and an important role in educating and informing the business community and the public. What the FTC cannot be, however, is a legislature with broad authority to expand, contract, or alter the laws that Congress has tasked it with enforcing.

Recent proposals for aggressive UMC rulemaking, predicated on Section 6(g) of the FTC Act, would have the effect of claiming just this sort of quasi-legislative power for the commission based on a thin statutory reed authorizing “rules and regulations for the purpose of carrying out the provisions of” that act. This usurpation of power would distract the agency from its core mission of case-by-case expert application of the FTC Act through administrative adjudication. It would also be inconsistent with the explicit grants of rulemaking authority that Congress has given the FTC and run afoul of the congressional and constitutional “guard rails” that cabin the commission’s authority.

FTC’s Unique Role as an Administrative Adjudicator

The FTC’s Part III adjudication authority is central to its mission of preserving fair competition in the U.S. economy. The FTC has enjoyed considerable success in recent years with its administrative adjudications, both in terms of winning on appeal and in shaping the development of antitrust law overall (not simply a separate category of UMC law) by creating citable precedent in key areas. However, as a result of its July 1, 2021, open meeting and President Joe Biden’s “Promoting Competition in the American Economy” executive order, the FTC appears to be headed for another misadventure in response to calls to claim authority for broad, legislative-style “unfair methods of competition” rulemaking out of Section 6(g) of the FTC Act. The commission recently took a significant and misguided step toward this goal by rescinding—without replacing—its bipartisan Statement of Enforcement Principles Regarding “Unfair Methods of Competition” Under Section 5 of the FTC Act, divorcing (at least in the commission majority’s view) Section 5 from prevailing antitrust-law principles and leaving the business community without any current guidance as to what the commission considers “unfair.”

FTC’s Rulemaking Authority Was Meant to Complement its Case-by-Case Adjudicatory Authority, Not Supplant It

As described below, broad rulemaking of this sort would likely encounter stiff resistance in the courts, due to its tenuous statutory basis and the myriad constitutional and institutional problems it creates. But even aside from the issue of legality, such a move would distract the FTC from its fundamental function as an expert case-by-case adjudicator of competition issues. It would be far too tempting for the commission to simply regulate its way to the desired outcome, bypassing all neutral arbiters along the way. And by seeking to promulgate such rules through abbreviated notice-and-comment rulemaking, the FTC would be claiming extremely broad substantive authority to directly regulate business conduct across the economy with relatively few of the procedural protections that Congress felt necessary for the FTC’s trade-regulation rules in the consumer-protection context. This approach risks not only a diversion of scarce agency resources from meaningful adjudication opportunities, but also potentially a loss of public legitimacy for the commission should it try to exempt itself from these important rulemaking safeguards.

FTC Lacks Authority to Promulgate Legislative-Style Competition Rules

The FTC has historically been hesitant to exercise UMC rulemaking authority under Section 6(g) of the FTC Act, which simply states that FTC shall have power “[f]rom time to time to classify corporations and … to make rules and regulations for the purpose of carrying out the provisions” of the FTC Act. Current proponents of UMC rulemaking argue for a broad interpretation of this clause, allowing for legally binding rulemaking on any issue subject to the FTC’s jurisdiction. But the FTC’s past reticence to exercise such sweeping powers is likely due to the existence of significant and unresolved questions of the FTC’s UMC rulemaking authority from both a statutory and constitutional perspective.

Absence of Statutory Authority

The FTC’s authority to conduct rulemaking under Section 6(g) has been tested in court only once, in National Petroleum Refiners Association v. FTC. In that case, the FTC succeeded in classifying the failure to post octane ratings on gasoline pumps as “an unfair method of competition.” The U.S. Court of Appeals for the D.C. Circuit found that Section 6(g) did confer this rulemaking authority. But Congress responded two years later with the Magnuson-Moss Warranty-Federal Trade Commission Improvement Act of 1975, which created a new rulemaking scheme that applied exclusively to the FTC’s consumer-protection rules. This act expressly excluded rulemaking on unfair methods of competition from its authority. The statute’s provision that UMC rulemaking is unaffected by the legislation manifests strong congressional design that such rules would be governed not by Magnuson-Moss, but by the FTC Act itself. The reference in Magnuson-Moss to the statute not affecting “any authority” of the FTC to engage in UMC rulemaking—as opposed to “the authority”— reflects Congress’ agnostic view on whether the FTC possessed any such authority. It simply means that whatever authority exists for UMC rulemaking, the Magnuson-Moss provisions do not affect it, and Congress left the question open for the courts to resolve.

Proponents of UMC rulemaking argue that Magnuson-Moss left the FTC’s competition-rulemaking authority intact and entitled to Chevron deference. But, as has been pointed out by many commentators over the decades, that would be highly incongruous, given that National Petroleum Refiners dealt with both UMC and UDAP authority under Section 6(g), yet Congress’ reaction was to provide specific UDAP rulemaking authority and expressly take no position on UMC rulemaking. As further evidenced by the fact that the FTC has never attempted to promulgate a UMC rule in the years following enactment of Magnuson-Moss, the act is best read as declining to endorse the FTC’s UMC rulemaking authority. Instead, it leaves the question open for future consideration by the courts.

Turning to the terms of the FTC Act, modern statutory interpretation takes a far different approach than did the court in National Petroleum Refiners, which discounted the significance of Section 5’s enumeration of adjudication as the means for restraining UMC and UDAP, reasoning that Section 5(b) did not use limiting language and that Section 6(g) provides a source of substantive rulemaking authority. This approach is in clear tension with the elephants-in-mouseholes doctrine developed by the Supreme Court in recent years. The FTC’s recent claim of broad substantive UMC rulemaking authority, based on the absence of limiting language and on a vague, ancillary provision authorizing rulemaking alongside the ability to “classify corporations,” stands in conflict with the Court’s admonition in Whitman v. American Trucking Associations that Congress does not hide elephants in mouseholes. The Court in AMG Capital Management, LLC v. FTC recently applied similar principles in the context of the FTC’s authority under the FTC Act. There, the Court emphasized “the historical importance of administrative proceedings” and declined to give the FTC a shortcut to desirable outcomes in federal court. Similarly, granting broad UMC-rulemaking authority to the FTC would permit it to circumvent the FTC Act’s defining feature of case-by-case adjudication. Applying the principles enunciated in Whitman and AMG, Section 5 is best read as specifying the sole means of UMC enforcement (adjudication), and Section 6(g) is best understood as permitting the FTC to specify how it will carry out its adjudicative, investigative, and informative functions. Thus, Section 6(g) grants ministerial, not legislative, rulemaking authority.

Notably, this reading of the FTC Act would accord with how the FTC viewed its authority until 1962, a fact that the D.C. Circuit found insignificant, but that later doctrine would weigh heavily. Courts should consider an agency’s “past approach” toward its interpretation of a statute, and an agency’s longstanding view that it lacks the authority to take a certain action is a “rather telling” clue that the agency’s newfound claim to such authority is incorrect. Conversely, even widespread judicial acceptance of an interpretation of an agency’s authority does not necessarily mean the construction of the statute is correct. In AMG, the Court gave little weight to the FTC’s argument that appellate courts “have, until recently, consistently accepted its interpretation.” It also rejected the FTC’s argument that “Congress has in effect twice ratified that interpretation in subsequent amendments to the Act.” Because the amendments did not address the scope of Section 13(b), they did not convince the Court in AMG that Congress had acquiesced in the lower courts’ interpretation.

The court in National Petroleum Refiners also lauded the benefits of rulemaking authority and emphasized that the ability to promulgate rules would allow the FTC to carry out the purpose of the act. But the Supreme Court has emphasized that “however sensible (or not)” an interpretation may be, “a reviewing court’s task is to apply the text of the statute, not to improve upon it.” Whatever benefits UMC-rulemaking authority may confer on the FTC, they cannot justify departure from the text of the FTC Act.

In sum, even Chevron requires the agency to rely on a “permissible construction” of the statute, and it is doubtful that the current Supreme Court would see a broad assertion of substantive antitrust rulemaking as “permissible” under the vague language of Section 6(g).

Constitutional Vulnerabilities

The shaky foundation supporting the FTC’s claimed authority for UMC rulemaking is further weakened by both the potential breadth of such rules and the lack of clear guidance in Section 6(g) itself. Either of these factors increases the likelihood that any rule promulgated under Section 6 would run afoul of the constitutional nondelegation doctrine.

The nondelegation doctrine requires Congress to provide “an intelligible principle” to assist the agency to which it has delegated legislative discretion. Although long considered moribund, the doctrine was recently addressed by the U.S. Supreme Court in Gundy v. United States, which underscored the current relevance of limitations on Congress’ ability to transfer unfettered legislative-like powers to federal agencies. Although the statute in that case was ruled permissible by a plurality of justices, most of the Court’s current members have expressed concerns that the Court has long been too quick to reject nondelegation arguments, arguing for stricter controls in this area. In a concurrence, Justice Samuel Alito lamented that the Court has “uniformly rejected nondelegation arguments and has upheld provisions that authorized agencies to adopt important rules pursuant to extraordinarily capacious standards,” while Justices Neil Gorsuch and Clarence Thomas and Chief Justice John Roberts dissented, decrying the “unbounded policy choices” Congress had bestowed, stating that it “is delegation running riot” to “hand off to the nation’s chief prosecutor the power to write his own criminal code.”

The Gundy dissent cited A.L.A. Schechter Poultry Corp. v. United States, in which the Supreme Court struck down a congressional delegation of authority based on language very similar to Section 5 of the FTC Act. Schechter Poultry examined whether the authority that Congress granted to the president under the National Industrial Recovery Act (NIRA) violated the nondelegation doctrine. The offending NIRA provision gave the president authority to approve “codes of fair competition,” which comes uncomfortably close to the FTC Act’s “unfair methods of competition” grant of authority. Notably, Schechter Poultry expressly differentiated NIRA from the FTC Act based on distinctions that do not apply in the rulemaking context. Specifically, the Court stated that, despite the similar delegation of authority, actions under the FTC Act, unlike those under NIRA, are subject to an adjudicative process. The Court observed that the commission serves as “a quasi judicial body” and assesses what constitutes unfair methods of competition “in particular instances, upon evidence, in light of particular competitive conditions.” That essential distinction disappears in the case of rulemaking, where the commission acts in a quasi-legislative role and promulgates rules of broad application.

It appears that the nondelegation doctrine may be poised for a revival and may play a significant role in the Supreme Court’s evaluation of expansive attempts by the Biden administration to exercise legislative-type authority without explicit congressional authorization and guidance. This would create a challenging backdrop for the FTC to attempt aggressive new UMC rulemaking.

Antitrust Rulemaking by FTC Is Likely to Lead to Inefficient Outcomes and Institutional Conflicts

Aside from the doubts raised by these significant statutory and constitutional issues as to the legality of competition rulemaking by the FTC, there are also several policy and institutional factors counseling against legislative-style antitrust rulemaking.

Legislative Rulemaking on Competition Issues Runs Contrary to the Purpose of Antitrust Law

The core of U.S. antitrust law is based on broadly drafted statutes that, at least for violations outside the criminal-conspiracy context, leave determinations of likely anticompetitive effects, procompetitive justifications, and ultimate liability up to factfinders charged with highly detailed, case-specific determinations. Although no factfinder is infallible, this requirement for highly fact-bound analysis helps to ensure that each case’s outcome has a high likelihood of preserving or increasing consumer welfare.

Legislative rulemaking would replace this quintessentially fact-based process with one-size-fits-all bright-line rules. Competition rules would function like per se prohibitions, but would rest on notice-and-comment procedures rather than the broad and longstanding legal and economic consensus usually required for per se condemnation under the Sherman Act. Past experience with similar regulatory regimes should give pause here: the Interstate Commerce Commission, for example, failed to regulate the railroad industry efficiently before being abolished by bipartisan consensus in 1996, at an estimated cost to consumers of as much as several billion (in today’s dollars) annually in lost competitive benefits. As FTC Commissioner Christine Wilson observes, regulatory rules “frequently stifle innovation, raise prices, and lower output and quality without producing concomitant health, safety, and other benefits for consumers.” By sacrificing the precision of case-by-case adjudication, rulemaking advocates would also lose one of the best tools we have to account for “market dynamics, new sources of competition, and consumer preferences.”

Potential for Institutional Conflict with DOJ

In addition to these substantive concerns, UMC rulemaking by the FTC would also create institutional conflicts between the FTC and DOJ and lead to divergence between the legal standards applicable to the FTC Act, on the one hand, and the Sherman and Clayton acts, on the other. At present, courts have interpreted the FTC Act to be generally coextensive with the prohibitions on unlawful mergers and anticompetitive conduct under the Sherman and Clayton acts, with the limited exception of invitations to collude. But because the FTC alone has the authority to enforce the FTC Act, and rulemaking by the FTC would be limited to interpretations of that act (and could not directly affect or repeal caselaw interpreting the Sherman and Clayton acts), it would create two separate standards of liability. Given that the FTC and DOJ historically have divided enforcement between the agencies based on the industry at issue, this could result in different rules of conduct, depending on the industry involved. Types of conduct that have the potential for anticompetitive effects under certain circumstances but generally pass a rule-of-reason analysis could nonetheless be banned outright if the industry is subject to FTC oversight. Dissonance between the two federal enforcement agencies would be even more difficult for companies not falling firmly within either agency’s purview; those entities would lack certainty as to which guidelines to follow: rule-of-reason precedent or FTC rules.

Conclusion

Following its rebuke at the Supreme Court in the AMG Capital Management case, the FTC should now focus on its core, case-by-case administrative mission, taking full advantage of its unique adjudicative expertise. Broad unfair-methods-of-competition rulemaking, by contrast, would be an aggressive step in the wrong direction—away from the FTC’s core mission and toward a no-man’s-land far afield from the FTC’s governing statutes.

Biden administration enforcers at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) have prioritized labor-market monopsony issues for antitrust scrutiny (see, for example, here and here). This heightened interest comes in light of claims that labor markets are highly concentrated and are rife with largely neglected competitive problems that depress workers’ income. Such concerns are reflected in a March 2022 U.S. Treasury Department report on “The State of Labor Market Competition.”

Monopsony is the “flip side” of monopoly, and U.S. antitrust law clearly condemns agreements designed to undermine the “buyer side” competitive process (see, for example, this U.S. government submission to the OECD). But is a special new emphasis on labor markets warranted, given that antitrust enforcers ideally should allocate their scarce resources to the most pressing (highest-valued) areas of competitive concern?

A May 2022 Information Technology and Innovation Foundation (ITIF) study by ITIF Associate Director (and former FTC economist) Julie Carlson indicates that the degree of emphasis the administration’s antitrust enforcers are placing on labor issues may be misplaced. In particular, the ITIF study debunks the Treasury report’s findings of high levels of labor-market concentration and its claim that workers face a “decrease in wages [due to labor market power] at roughly 20 percent relative to the level in a fully competitive market.” Furthermore, while noting the importance of DOJ antitrust prosecutions of hard-core anticompetitive agreements among employers (wage-fixing and no-poach agreements), the ITIF report emphasizes policy reforms unrelated to antitrust as key to improving workers’ lot.

Key takeaways from the ITIF report include:

  • Labor markets are not highly concentrated. Local labor-market concentration has been declining for decades, with the most concentrated markets seeing the largest declines.
  • Labor-market power is largely due to labor-market frictions, such as worker preferences, search costs, bargaining, and occupational licensing, rather than concentration.
  • As a case study, changes in concentration in the labor market for nurses have little to no effect on wages, whereas nurses’ preferences over job location are estimated to lead to wage markdowns of 50% (the markdown mechanics are sketched just after this list).
  • Firms are not profiting at the expense of workers. The decline in the labor share of national income is primarily due to rising home values, not increased labor-market concentration.
  • Policy reform should focus on reducing labor-market frictions and strengthening workers’ ability to collectively bargain. Policies targeting concentration are misguided and will be ineffective at improving outcomes for workers.
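
To make the “markdown” terminology above concrete, here is a minimal sketch of the textbook monopsony model, in which the markdown is driven by the elasticity of labor supply facing the employer rather than by concentration as such. The productivity figure and elasticities below are hypothetical and are not drawn from the ITIF or Treasury estimates:

```python
# Minimal monopsony-markdown sketch (hypothetical numbers throughout).
# A single employer faces a labor-supply curve with constant elasticity eps.
# Profit maximization implies (MRP - w) / w = 1 / eps, so:
#   w = MRP * eps / (1 + eps), and the markdown below MRP is 1 / (1 + eps).

def monopsony_wage(mrp: float, eps: float) -> float:
    """Profit-maximizing wage for a monopsonist facing supply elasticity eps."""
    return mrp * eps / (1 + eps)

mrp = 50.0  # hypothetical marginal revenue product of labor, $/hour
for eps in (1.0, 4.0, 10.0):
    w = monopsony_wage(mrp, eps)
    markdown = (mrp - w) / mrp
    print(f"supply elasticity {eps:>4}: wage ${w:.2f}, markdown {markdown:.0%}")

# Low elasticity (strong location preferences, search frictions) yields a
# large markdown even with no change in employer concentration -- the
# report's point that frictions, not concentration, drive wage markdowns.
```

With a supply elasticity of 1, the markdown is 50%; with an elasticity of 4, it is 20%. Large markdowns are thus fully consistent with unconcentrated labor markets.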

The ITIF report also throws cold water on the notion of emphasizing labor-market issues in merger reviews, which was teed up in the January 2022 joint DOJ/FTC request for information (RFI) on merger enforcement. The ITIF report explains:

Introducing the evaluation of labor market effects unnecessarily complicates merger review and needlessly ties up agency resources at a time when the agencies are facing severe resource constraints. As discussed previously, labor markets are not highly concentrated, nor is labor market concentration a key factor driving down wages.

A proposed merger that is reportable to the agencies under the Hart-Scott-Rodino Act and likely to have an anticompetitive effect in a relevant labor market is also likely to have an anticompetitive effect in a relevant product market. … Evaluating mergers for labor market effects is unnecessary and costly for both firms and the agencies. The current merger guidelines adequately address competition concerns in input markets, so any contemplated revision to the guidelines should not incorporate a “framework to analyze mergers that may lessen competition in labor markets.” [Citation to Request for Information on Merger Enforcement omitted.]

In sum, the administration’s recent pronouncements about highly anticompetitive labor markets that have resulted in severely underpaid workers—used as the basis to justify heightened antitrust emphasis on labor issues—appear to be based on false premises. As such, they are a species of government misinformation, which, if acted upon, threatens to misallocate scarce enforcement resources and thereby undermine efficient government antitrust enforcement. What’s more, an unnecessary overemphasis on labor-market antitrust questions could impose unwarranted investigative costs on companies and chill potentially efficient business transactions. (Think of a proposed merger that would reduce production costs and benefit consumers but result in a workforce reduction by the merged firm.)

Perhaps the administration will take heed of the ITIF report and rethink its plans to ramp up labor-market antitrust-enforcement initiatives. Promoting pro-market regulatory reforms that benefit both labor and consumers (for instance, paring back excessive occupational-licensing restrictions) would be a welfare-superior and cheaper alternative to misbegotten antitrust actions.

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. On the other hand, proponents fail to follow this logic to its natural conclusion: they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
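
To see concretely what “profitably raise prices above the competitive baseline” entails, consider the critical-loss arithmetic that conventionally accompanies a SSNIP. This is a minimal sketch; the 5% increase, 40% margin, and predicted-loss figure are all hypothetical:

```python
# Minimal critical-loss sketch for the hypothetical-monopolist (SSNIP) test.
# All figures are hypothetical and for illustration only.

def critical_loss(t: float, m: float) -> float:
    """Fraction of unit sales the hypothetical monopolist can afford to lose
    before a price increase of t becomes unprofitable, given margin m:
    X = t / (t + m)."""
    return t / (t + m)

t = 0.05  # a 5% SSNIP
m = 0.40  # 40% contribution margin at prevailing prices

threshold = critical_loss(t, m)  # 0.05 / 0.45, about 11.1% of sales
predicted_loss = 0.08            # hypothetical diversion estimate

print(f"critical loss = {threshold:.1%}, predicted loss = {predicted_loss:.0%}")
if predicted_loss < threshold:
    print("SSNIP is profitable -> candidate market is a relevant market")
else:
    print("SSNIP is unprofitable -> broaden the candidate market")
```

The hypothetical monopolist here loses 8% of its sales, yet the price increase remains profitable, which is why observed switching, standing alone, cannot settle market definition.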

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. The price structure, by contrast, refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Nobel laureate Jean Tirole show in their seminal work, changes to the price level and changes to the price structure each affect economic output in two-sided markets.

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
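
A stylized numerical sketch may help illustrate why. The linear demand functions and parameters below are invented purely to show that two price structures with the same price level can generate different output; they do not model any real platform:

```python
# Stylized two-sided platform: same price LEVEL, different price STRUCTURES.
# Hypothetical linear demands with a cross-side network effect: advertisers
# value the platform only insofar as users participate.

def participation(p_users, p_ads):
    users = max(0.0, 100 - 20 * p_users)            # users are price-sensitive
    ads = max(0.0, (50 - 2 * p_ads) * users / 100)  # advertisers value users
    return users, ads

# Two structures, each summing to the same level (p_users + p_ads = 5):
for p_u, p_a in [(0.0, 5.0), (2.5, 2.5)]:
    users, ads = participation(p_u, p_a)
    print(f"structure (users={p_u}, ads={p_a}): users={users:.0f}, ads={ads:.1f}")

# Output differs across the two structures even though the level is constant,
# so a test that perturbs the structure (e.g., a one-sided ad-load increase
# with ad prices held fixed) cannot isolate the exercise of market power.
```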

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market (thereby depressing ad prices) and drive users away (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may give rise to market power) and anticompetitive effects (the potentially undesirable exercise of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating), or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key finding to this insight. He found that among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.

Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (these are essentially the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here).

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities. In contrast, startup companies—almost by definition—must outsource production.

Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with little or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe intellectual-property protection is an important driver of venture-capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.

In short, intellectual property can stimulate startup activity through various mechanisms. There is thus good reason to believe that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. When it becomes harder for patent holders to obtain injunctions, licensees and implementers gain the advantage in the short term, because they are able to use patented technology without having to negotiate and pay the full market price.

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments:

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 

Conclusion

In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

Alden F. Abbott
Senior Research Fellow, Mercatus Center, George Mason University
Former General Counsel, U.S. Federal Trade Commission

Jonathan Barnett
Torrey H. Webb Professor of Law, University of Southern California

Ronald A. Cass
Dean Emeritus, School of Law, Boston University
Former Commissioner and Vice-Chairman, U.S. International Trade Commission

Giuseppe Colangelo
Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)

Richard A. Epstein
Laurence A. Tisch Professor of Law, New York University

Bowman Heiden
Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley

Justin (Gus) Hurwitz
Professor of Law, University of Nebraska

Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri

Stan J. Liebowitz
Ashbel Smith Professor of Economics, University of Texas at Dallas

John E. Lopatka
A. Robert Noll Distinguished Professor of Law, Penn State University

Keith Mallinson
Founder and Managing Partner, WiseHarbor

Geoffrey A. Manne
President and Founder, International Center for Law & Economics

Adam Mossoff
Professor of Law, George Mason University

Kristen Osenga
Austin E. Owen Research Scholar and Professor of Law, University of Richmond

Vernon L. Smith
George L. Argyros Endowed Chair in Finance and Economics, Chapman University
Nobel Laureate in Economics (2002)

Daniel F. Spulber
Elinor Hobbs Distinguished Professor of International Business, Northwestern University

David J. Teece
Thomas W. Tusher Professor in Global Business, University of California, Berkeley

Joshua D. Wright
University Professor of Law, George Mason University
Former Commissioner, U.S. Federal Trade Commission

John M. Yun
Associate Professor of Law, George Mason University
Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.
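
For readers who want the mechanics behind these two measures, the sketch below computes an HHI and a CR4 from invented market shares; the shares and resulting figures are hypothetical and unrelated to Kulick’s data:

```python
# Computing the two concentration measures cited above from hypothetical
# market shares (percent of industry sales).

shares = [30, 20, 15, 10, 10, 8, 7]  # invented industry; shares sum to 100

# Herfindahl-Hirschman Index: sum of squared shares, ranging from 0 to 10,000.
hhi = sum(s ** 2 for s in shares)

# Four-firm concentration ratio: combined share of the four largest firms.
cr4 = sum(sorted(shares, reverse=True)[:4])

# Prints HHI = 1838 and CR4 = 75% for these shares; for reference, the 2010
# Horizontal Merger Guidelines treat an HHI between 1,500 and 2,500 as only
# "moderately concentrated."
print(f"HHI = {hhi}")
print(f"CR4 = {cr4}%")
```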

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.

In short, the strongest justification for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

This question answers itself by citing the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).
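To make the contrast concrete, consider a minimal sketch in Python, using hypothetical market shares of my own invention (nothing below is drawn from the RFI; the only real-world inputs are the published thresholds), comparing a bare PNB-style 30% combined-share screen with the structural presumption of the 2010 Horizontal Merger Guidelines, which requires both a post-merger HHI above 2,500 points and an HHI increase of more than 200 points:

```python
# Illustrative sketch only: contrasting a bare PNB-style combined-share
# presumption with the structural screen of the 2010 Horizontal Merger
# Guidelines. All market shares below are hypothetical.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares, in points."""
    return sum(s * s for s in shares)

def pnb_presumption(combined_share, threshold=30.0):
    """PNB (1963): a combined share of roughly 30% alone 'presents that threat.'"""
    return combined_share >= threshold

def hmg_2010_presumption(shares, i, j):
    """2010 Guidelines: presumption only if post-merger HHI exceeds 2,500
    AND the merger increases HHI by more than 200 points."""
    delta = 2 * shares[i] * shares[j]        # HHI increase from merging firms i and j
    post_merger_hhi = hhi(shares) + delta    # (si + sj)^2 = si^2 + sj^2 + 2*si*sj
    return post_merger_hhi > 2500 and delta > 200

# Hypothetical market: a 28% firm buys a 3% rival; the rest is fragmented.
shares = [28.0, 3.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 9.0]

print(pnb_presumption(shares[0] + shares[1]))  # True:  31% trips the share line
print(hmg_2010_presumption(shares, 0, 1))      # False: HHI rises only 168 points,
                                               #        to about 1,642
```

In this hypothetical, a 28% firm acquiring a 3% rival in an otherwise fragmented market trips the bare share line while falling far short of the 2010 structural screen, illustrating how many otherwise unremarkable transactions a revived PNB presumption could sweep in.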

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that Section 7 enforcement is meant to serve.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive-harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as an exercise in arbitrary, “pre-cooked” condemnation, not dispassionate enforcement. As such, courts would be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” invokes the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable feature of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast potential real resource savings in a negative light.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more helpful approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.

The leading contribution to sound competition policy made by former Assistant U.S. Attorney General Makan Delrahim was his enunciation of the “New Madison Approach” to patent-antitrust enforcement—and, in particular, to the antitrust treatment of standard essential patent licensing (see, for example, here, here, and here). In short (citations omitted):

The New Madison Approach (“NMA”) advanced by former Assistant Attorney General for Antitrust Makan Delrahim is a simple analytical framework for understanding the interplay between patents and antitrust law arising out of standard setting. A key aspect of the NMA is its rejection of the application of antitrust law to the “hold-up” problem, whereby patent holders demand supposedly supra-competitive licensing fees to grant access to their patents that “read on” a standard – standard essential patents (“SEPs”). This scenario is associated with an SEP holder’s prior commitment to a standard setting organization (“SSO”), that is: if its patented technology is included in a proposed new standard, it will license its patents on fair, reasonable, and non-discriminatory (“FRAND”) terms. “Hold-up” is said to arise subsequently, when the SEP holder reneges on its FRAND commitment and demands that a technology implementer pay higher-than-FRAND licensing fees to access its SEPs.

The NMA has four basic premises that are aimed at ensuring that patent holders have adequate incentives to innovate and create welfare-enhancing new technologies, and that licensees have appropriate incentives to implement those technologies:

1. Hold-up is not an antitrust problem. Accordingly, an antitrust remedy is not the correct tool to resolve patent licensing disputes between SEP-holders and implementers of a standard.

2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders in setting the terms of access to patents that cover a new standard.

3. A fundamental element of patent rights is the right to exclude. As such, SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents, by, for example, seeking injunctions.

4. Unilateral and unconditional decisions not to license a patent should be per se legal.

Delrahim emphasizes that the threat of antitrust liability, specifically treble damages, distorts the incentives associated with good faith negotiations with SSOs over patent inclusion. Contract law, he goes on to note, is perfectly capable of providing an ex post solution to licensing disputes between SEP holders and implementers of a standard. Unlike antitrust law, a contract law framework allows all parties equal leverage in licensing negotiations.

As I have explained elsewhere, the NMA is best seen as a set of policies designed to spark dynamic economic growth:

[P]atented technology serves as a catalyst for the wealth-creating diffusion of innovation. This occurs through numerous commercialization methods; in the context of standardized technologies, the development of standards is a process of discovery. At each [SSO], the process of discussion and negotiation between engineers, businesspersons, and all other relevant stakeholders reveals the relative value of alternative technologies and tends to result in the best patents being integrated into a standard.

The NMA supports this process of discovery and implementation of the best patented technology born of the labors of the innovators who created it. As a result, the NMA ensures SEP valuations that allow SEP holders to obtain an appropriate return for the new economic surplus that results from the commercialization of standard-engendered innovations. It recognizes that dynamic economic growth is fostered through the incentivization of innovative activities backed by patents.

In sum, the NMA seeks to promote innovation by offering incentives for SEP-driven technological improvements. As such, it rejects as ill-founded prior Federal Trade Commission (FTC) litigation settlements and Obama-era U.S. Justice Department (DOJ) Antitrust Division policy statements that artificially favored implementer licensees’ interests over those of SEP licensors (see here).

In light of the NMA, DOJ cooperated with the U.S. Patent and Trademark Office and National Institute of Standards and Technology (NIST) in issuing a 2019 SEP Policy Statement clarifying that an SEP holder’s promise to license a patent on fair, reasonable, and non-discriminatory (FRAND) terms does not bar it from seeking any available remedy for patent infringement, including an injunction. This signaled that SEPs and non-SEP patents enjoy equivalent legal status.

DOJ also issued a 2020 supplement to its 2015 Institute of Electrical and Electronics Engineers (IEEE) business review letter. The 2015 letter had found no legal fault with revised IEEE standard-setting policies that implicitly favored implementers of standardized technology over SEP holders. The 2020 supplement characterized key elements of the 2015 letter as “outdated,” and noted that the anti-SEP bias of that document could “harm competition and chill innovation.”   

Furthermore, DOJ issued a July 2019 Statement of Interest before the 9th U.S. Circuit Court of Appeals in FTC v. Qualcomm, explaining that unilateral and unconditional decisions not to license a patent are legal under the antitrust laws. In August 2020, the 9th Circuit reversed the district court’s decision and rejected the FTC’s monopolization suit against Qualcomm. The circuit court, among other findings, held that Qualcomm had no antitrust duty to license its SEPs to competitors.

Regrettably, the Biden administration appears to be close to rejecting the NMA and reinstituting the SEP-skeptical, anti-strong-patent views of the Obama administration (see here and here). DOJ already has effectively repudiated the 2020 supplement to the 2015 IEEE letter and the 2019 SEP Policy Statement. Furthermore, written responses to Senate Judiciary Committee questions by assistant attorney general nominee Jonathan Kanter suggest support for renewed antitrust scrutiny of SEP licensing. These developments are highly problematic for anyone who supports dynamic economic growth.

Conclusion

The NMA represents a pro-American, pro-growth innovation policy prescription. Its abandonment would reduce incentives to invest in patents and standard-setting activities, to the detriment of the U.S. economy. Such a development would be particularly unfortunate at a time when U.S. Supreme Court decisions have weakened American patent rights (see here); China is taking steps to strengthen Chinese patents and raise incentives to obtain Chinese patents (see here); and China is engaging in litigation to weaken key U.S. patents and undermine American technological leadership (see here).

The rejection of the NMA would also be in tension with the logic of the 5th U.S. Circuit Court of Appeals’ 2021 HTC v. Ericsson decision, which rejected the argument that the non-discrimination prong of the FRAND commitment required Ericsson to give HTC the same licensing terms as those given to larger mobile-device manufacturers. Furthermore, recent important European court decisions are generally consistent with NMA principles (see here).

Given the importance of dynamic competition in an increasingly globalized world economy, Biden administration officials may wish to take a closer look at the economic arguments supporting the NMA before taking final action to condemn it. Among other things, the administration might take note that major U.S. digital platforms, which are the subject of multiple U.S. and foreign antitrust enforcement investigations, tend to firmly oppose strong patent rights. As one major innovation economist recently pointed out:

If policymakers and antitrust gurus are so concerned about stemming the rising power of Big Tech platforms, they should start by first stopping the relentless attack on IP. Without the IP system, only the big and powerful have the privilege to innovate[.]

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage the platform’s own services over those of any other business that also uses the platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast and sellers are often so anonymous that any assistance which helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another service, to reduce additional time spent searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is procompetitive, which may be very difficult to do—this could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on use of another service from that firm. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example is Amazon tying sellers’ eligibility for its Prime program to their use of Amazon’s delivery service (FBA – Fulfilled By Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties between buyers and sellers they’d otherwise experience if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labelling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe their customers to post positive reviews of their products, or even threaten customers who have posted negative ones. Part of the role of a platform is to combat that kind of behavior by acting as a middleman and forcing both consumer users and business users to comply with the platform’s own mechanisms to control that kind of behavior.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.