Archives For Merger Guidelines

In the U.S. system of dual federal and state sovereigns, a normative analysis reveals principles that could guide state antitrust-enforcement priorities, promote complementarity in federal and state antitrust policy, and thereby advance consumer welfare.

Discussion

Positive analysis reveals that state antitrust enforcement is a firmly entrenched feature of American antitrust policy. The U.S. Supreme Court (1) has consistently held that federal antitrust law does not displace state antitrust law (see, for example, California v. ARC America Corp. (U.S., 1989) (“Congress intended the federal antitrust laws to supplement, not displace, state antitrust remedies”)); and (2) has upheld state antitrust laws even when they have some impact on interstate commerce (see, for example, Exxon Corp. v. Governor of Maryland (U.S., 1978)).

The normative question remains, however, as to what the appropriate relationship between federal and state antitrust enforcement should be. Should federal and state antitrust regimes be complementary, with state law enforcement enhancing the effectiveness of federal enforcement? Or should state antitrust enforcement compete with federal enforcement, providing an alternative “vision” of appropriate antitrust standards?

The generally accepted (until very recently) modern American consumer-welfare-centric antitrust paradigm (see here) points to the complementary approach as most appropriate. In other words, if antitrust is indeed the “Magna Carta” of American free enterprise (see United States v. Topco Associates, Inc. (U.S., 1972)), and if consumer welfare is the paramount goal of antitrust (a position consistently held by the Supreme Court since Reiter v. Sonotone Corp. (U.S., 1979)), it follows that federal and state antitrust enforcement coexist best as complements, directed jointly at maximizing consumer-welfare enhancement. In recent decades it also generally has made sense for state enforcers to defer to U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) matter-specific consumer-welfare assessments. This conclusion follows from the federal agencies’ specialized resource advantage, reflected in large staffs of economic experts and attorneys with substantial industry knowledge.

The reality, nevertheless, is that while state enforcers often have cooperated with their federal colleagues on joint enforcement, state enforcement approaches historically have been imperfectly aligned with federal policy. That imperfect alignment has been at odds with consumer welfare in key instances. Certain state antitrust schemes, for example, continue to treat resale price maintenance (RPM) as per se illegal (see, for example, here), a position inconsistent with the federal consumer-welfare-centric rule-of-reason approach (see Leegin Creative Leather Products, Inc. v. PSKS, Inc. (U.S., 2007)). The disparate treatment of RPM has a substantial national impact on business conduct, because commercially important states such as California and New York are among those that continue to flatly condemn RPM.

State enforcers also have from time to time sought to oppose major transactions that received federal antitrust clearance, such as several states’ unsuccessful opposition to the Sprint/T-Mobile merger (see here). Although the states failed to block the merger, they did extract settlement concessions that imposed burdens on the merging parties, in addition to the divestiture requirements imposed by the DOJ in settling the matter (see here). Inconsistencies between federal and state antitrust-enforcement decisions on cases of nationwide significance generate litigation waste and may detract from final resolutions that optimize consumer welfare.

If consumer-welfare optimization is their goal (which I believe it should be in an ideal world), state attorneys general should seek to direct their limited antitrust resources to their highest-valued uses, rather than seeking to second-guess federal antitrust policy and enforcement decisions.

An optimal approach might focus first and foremost on allocating state resources to combat primarily intrastate competitive harms that are clear and unequivocal (such as intrastate bid rigging, hard-core price fixing, and horizontal market division). This could free up federal resources to focus on matters that are primarily interstate in nature, consistent with federalism. (In this regard, see a thoughtful proposal by D. Bruce Johnsen and Moin A. Yahya.)

Second, state enforcers could also devote some resources to assist federal enforcers in developing state-specific evidence in support of major national cases. (This would allow state attorneys general to publicize their “big case” involvement in a productive manner.)

Third, and not least, competition advocacy directed at the removal of anticompetitive state laws and regulations could prove an effective means of seeking to improve the competitive climate within individual states (see, for example, here). State antitrust enforcers could advance advocacy through amicus curiae briefs, and (where politically feasible) through interventions (perhaps informal) with peer officials who oversee regulation. Subject to this general guidance, the nature of state antitrust resource allocations would depend upon the specific competitive problems particular to each state.

Of course, in the real world, public-choice considerations and rent seeking may at times influence antitrust enforcement decision-making by state (and federal) officials. Nonetheless, this capsule normative summary of a suggested ideal state antitrust-enforcement protocol is useful in that it highlights how state enforcers could usefully complement (assumed) sound federal antitrust initiatives.

Great minds think alike. A well-crafted and much more detailed normative exploration of ideal state antitrust enforcement is found in a recently released Pelican Institute policy brief by Ted Bolema and Eric Peterson. Entitled The Proper Role for States in Antitrust Lawsuits, the brief concludes (in a manner consistent with my observations):

This review of cases and leading commentaries shows that states should focus their involvement in antitrust cases on instances where:

· they have unique interests, such as local price-fixing

· they play a unique role, such as where they can develop evidence about how alleged anticompetitive behavior uniquely affects local markets

· they can bring additional resources to bear on existing federal litigation.

States can also provide a useful check on overly aggressive federal enforcement by providing courts with a traditional perspective on antitrust law — a role that could become even more important as federal agencies aggressively seek to expand their powers. All of these are important roles for states to play in antitrust enforcement, and translate into positive outcomes that directly benefit consumers.

Conversely, when states bring significant, novel antitrust lawsuits on their own, they don’t tend to benefit either consumers or constituents. These novel cases often move resources away from where they might be used more effectively, and states usually lose (as with the recent dismissal with prejudice of a state case against Facebook). Through more strategic antitrust engagement, with a focus on what states can do well and where they can make a positive difference in antitrust enforcement, states would best serve the interests of their consumers, constituents, and taxpayers.

Conclusion

Under a consumer-welfare-centric regime, an appropriate role can be identified for state antitrust enforcement that would helpfully complement federal efforts in an optimal fashion. Unfortunately, in this tumultuous period of federal antitrust policy shifts, in which the central role of the consumer welfare standard has been called into question, it might appear fatuous to speculate on the ideal melding of federal and state approaches to antitrust administration. One should, however, prepare for the time when a more enlightened, economically informed approach will be reinstituted. In anticipation of that day, serious thinking about antitrust federalism should not be neglected.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as being of strategic market significance—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; and that may prove true, but there is nothing in the proposal limiting DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-based Given its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches is always the case, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would place an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions, or PCI, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar.  Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A New Pro-competition Regime for Digital Markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, July 2021, at ¶ 27, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.


The Federal Trade Commission (FTC) has taken another step away from case-specific evaluation of proposed mergers and toward an ex ante regulatory approach in its Oct. 25 “Statement of the Commission on Use of Prior Approval Provisions in Merger Orders.” Though not unexpected, this unfortunate initiative once again manifests the current FTC leadership’s disdain for long-accepted economically sound antitrust-enforcement principles.

Discussion

High levels of merger activity should, generally speaking, be viewed as a symptom of a vibrant economy, not a reason for economic concern. Horizontal mergers typically are driven by the potential to realize real cost savings, unrelated to anticompetitive reductions in output.

Non-horizontal mergers often generate welfare-enhancing reductions in double marginalization, while uniting complements and achieving efficiency-creating synergies. More generally, proposed acquisitions frequently reflect an active market for corporate control that seeks to reallocate scarce resources to higher-valued uses (see, for example, Henry Manne’s seminal article on “Mergers and the Market for Corporate Control”). Finally, by facilitating cost reductions, synergies, and improvements in resource allocations within firms, mergers may allow the new consolidated entity to compete more effectively in the marketplace, thereby enhancing competition.

Given the economic benefits frequently generated by mergers, government antitrust enforcers should not discourage them, nor should they intervene to block them, absent a strong showing that a particular transaction would likely reduce competition and harm consumer welfare. In the United States, the Hart-Scott-Rodino Premerger Notification Act of 1976 (HSR) and its implementing regulations generally have reflected this understanding. They have done this by requiring that proposed transactions above a certain size threshold be notified to the FTC and the U.S. Justice Department (DOJ), and by providing a framework for timely review, allowing most notified mergers to close promptly.

In the relatively few cases where agency enforcement staff have identified competitive problems, the HSR framework usually has enabled timely negotiation of possible competitive fixes (divestitures and, less typically, behavioral remedies). Where fixes have not been feasible, filing parties generally have been able to decide whether to drop a transaction or prepare for litigation within a reasonable time period. Under the HSR framework, enforcers generally have respected the time sensitivity of merger proposals and acted expeditiously (with a few exceptions) to review complicated and competitively sensitive transactions. The vast majority of HSR filings that facially raise no plausible competitive issues historically have been dealt with swiftly—often through “early termination” policies that provide the merging parties an antitrust go-ahead well before the end of HSR’s initial 30-day review period.

In short, although far from perfect, HSR processes have sought to minimize regulatory impediments to merger activity, consistent with the statutory mandate to identify and prevent anticompetitive mergers.      

Regrettably, under the leadership of Chair Lina M. Khan, the FTC has taken unprecedented steps to undermine the well-understood HSR framework. As I wrote recently:

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and

2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions inject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

The FTC’s merger-review reign of error continues. Most recently, it released a policy guidance statement that effectively transforms the commission into a merger regulator whose assent is required for a specific category of mergers. This policy is at odds with HSR, which is designed to facilitate merger reviews, not to serve as a regulatory-approval mechanism. As the FTC explains in its Oct. 25 statement (citation to 1995 Statement omitted) (approved by a 3-2 vote, with Commissioners Noah Joshua Phillips and Christine S. Wilson dissenting):

On July 21, 2021, the Commission voted to rescind the 1995 Policy Statement on Prior Approval and Prior Notice Provisions (“1995 Statement”). The 1995 Statement ended the Commission’s then-longstanding practice of incorporating prior approval and prior notice provisions in Commission orders addressing mergers. With the rescission of the 1995 statement, the Commission returns now to its prior practice of routinely requiring merging parties subject to a Commission order to obtain prior approval from the FTC before closing any future transaction affecting each relevant market for which a violation was alleged. . . .

In addition, from now on, in matters where the Commission issues a complaint to block a merger and the parties subsequently abandon the transaction, the agency will engage in a case-specific determination as to whether to pursue a prior approval order, focusing on the factors identified below with respect to use of broader prior approval provisions. The fact that parties may abandon a merger after litigation commences does not guarantee that the Commission will not subsequently pursue an order incorporating a prior approval provision. . . .

In some situations where stronger relief is needed, the Commission may decide to seek a prior approval provision that covers product and geographic markets beyond just the relevant product and geographic markets affected by the merger. No single factor is dispositive; rather, the Commission will take a holistic view of the circumstances when determining the length and breadth of prior approval provisions. [Six factors listed include the nature of the transaction; the level of market concentration; the degree to which the transaction increases concentration; the degree to which one of the parties pre-merger likely had market power; the parties’ history of acquisitiveness; and evidence of anticompetitive market dynamics.]

The Oct. 25 Statement is highly problematic in several respects. Its oversight requirements may discourage highly effective consent decree “fixes” of potential mergers, leading to wasteful litigation—or, alternatively, the abandonment of efficient transactions. What’s more, the threat of FTC prior approval orders (based on multiple criteria subject to manipulation by the FTC), even when parties abandon a proposed transaction (and thus, effectively have “done nothing”), smacks of unwarranted regulation of future corporate plans of disfavored firms, raising questions of fundamental fairness.

All told, the new requirements, combined with the FTC’s policies to end early terminations and to stop “greenlighting” routine merger transactions after a 30-day review, are yet further signs that the well-understood HSR consensus has been unilaterally abandoned by the FTC, based on purely partisan commission votes and without any public consultation. The FTC’s abrupt and arbitrary merger-review actions will harm the economy by discouraging welfare-promoting consolidations. These actions also fly in the face of sound public administration.

Conclusion

The FTC continues to move at warp speed from its historic role of antitrust enforcer to that of antitrust regulator, based on a series of 3-2 votes. The commission’s abandonment of a well-established bipartisan approach to HSR policy is particularly troublesome, given the new risks it creates for private parties considering acquisitions. These new risks will likely deter an unknown number of efficiency-enhancing, innovative combinations that could have benefited consumers and substantially strengthened the American economy.

Perhaps the imminent confirmation of Jonathan Kanter—an individual with many years of practical experience as a leading antitrust practitioner—to be assistant attorney general for antitrust will bring a more reasonable perspective to antitrust agency HSR policies. It may even convince a majority of the commission to return to the bipartisan HSR merger-review framework that has served the American economy well.

If not, perhaps congressional overseers might wish to investigate the implications for the American innovation economy and the rule of law stemming from the FTC’s de facto abandonment of HSR principles. Whether to fundamentally alter merger-review procedures should be up to Congress, not to three unelected officials.    

Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.

Introduction

The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.

Strategic Approach

The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.

This section is long on glittering generalities. First, it begins with the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Is the chair proposing new enforcement standards that would supplement, or even displace, consumer-welfare enhancement?

Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why these observations are superior to alternative means for spotting illegal behavior) is left unexplained.

Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)

Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.

Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.

Policy Priorities

The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.

First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.) 

Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out harmful anticompetitive behavior and consumer-protection harms. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion also is in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.

Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications that would be involved in challenging such provisions, and a recognition of possible welfare benefits that such restraints could generate under many circumstances. In that vein, mere perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.

Operational Objectives

The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.

Evaluating the VP Memo: 4 Key Takeaways

The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:

  1. Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
  2. Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts, reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
  3. The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions – despite the fact that several appellate courts had found such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
  4. Chair Khan’s unilateral statement of her policy priorities embodied in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been by unanimous vote. By ignoring this practice, the VP Memo departs from a longstanding bipartisan tradition; that departure will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seeming unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by a 3-2 vote and appear to reflect little or no consultation with minority commissioners.

Conclusion

The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.

As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to recraft a new consensus policy document.   

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that the total will exceed 80 companies, but it is likely to be far more. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
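The burden-shifting trigger just described can be sketched in a few lines of code. This is an illustrative sketch based solely on the thresholds as characterized in this post, not on the bill’s statutory text; the function and variable names are hypothetical.

```python
# Illustrative sketch of the Klobuchar bill's size-based burden-shifting
# trigger, as described in this post. All figures are in U.S. dollars.
# Names below are hypothetical, not drawn from the bill itself.

ACQUIRER_SIZE_THRESHOLD = 100_000_000_000  # $100 billion
DEAL_VALUE_THRESHOLD = 50_000_000          # $50 million

def burden_shifts(market_cap: float, assets: float,
                  net_revenue: float, deal_value: float) -> bool:
    """Return True if the burden of proof would shift to the merging
    parties under the trigger described above: the acquirer exceeds
    $100B in market cap, assets, or annual net revenue, AND the deal
    is valued at $50 million or more."""
    acquirer_is_large = (market_cap > ACQUIRER_SIZE_THRESHOLD
                         or assets > ACQUIRER_SIZE_THRESHOLD
                         or net_revenue > ACQUIRER_SIZE_THRESHOLD)
    return acquirer_is_large and deal_value >= DEAL_VALUE_THRESHOLD

# A $150B-market-cap acquirer making a $60M acquisition: burden shifts.
print(burden_shifts(150e9, 20e9, 30e9, 60e6))  # True
# No size criterion met: burden stays with the government.
print(burden_shifts(80e9, 50e9, 90e9, 60e6))   # False
```

Note that because the three size tests are disjunctive, a firm well under $100 billion in market cap can still be captured through its assets or sales, which is why the lists at the end of this post include asset-heavy banks and high-revenue, low-margin retailers.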

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately held Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of the thresholds will result in arbitrary application of the burden of proof. If passed, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales will be subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Danaher Corp., PepsiCo
Abbott Laboratories, Deere & Co., Pfizer
AbbVie, Eli Lilly and Co., Philip Morris International
Adobe Inc., ExxonMobil, Procter & Gamble
Advanced Micro Devices, Facebook Inc., Qualcomm
Alphabet Inc., General Electric Co., Raytheon Technologies
Amazon, Goldman Sachs, Salesforce
American Express, Honeywell, ServiceNow
American Tower, IBM, Square Inc.
Amgen, Intel, Starbucks
Apple Inc., Intuit, Target Corp.
Applied Materials, Intuitive Surgical, Tesla Inc.
AT&T, Johnson & Johnson, Texas Instruments
Bank of America, JPMorgan Chase, The Coca-Cola Co.
Berkshire Hathaway, Lockheed Martin, The Estée Lauder Cos.
BlackRock, Lowe’s, The Home Depot
Boeing, Mastercard, The Walt Disney Co.
Bristol Myers Squibb, McDonald’s, Thermo Fisher Scientific
Broadcom Inc., Medtronic, T-Mobile US
Caterpillar Inc., Merck & Co., Union Pacific Corp.
Charles Schwab Corp., Microsoft, United Parcel Service
Charter Communications, Morgan Stanley, UnitedHealth Group
Chevron Corp., Netflix, Verizon Communications
Cisco Systems, NextEra Energy, Visa Inc.
Citigroup, Nike Inc., Walmart
Comcast, Nvidia, Wells Fargo
Costco, Oracle Corp., Zoom Video Communications
CVS Health, PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial, Freddie Mac
American International Group, KeyBank
BNY Mellon, M&T Bank
Capital One, Northern Trust
Citizens Financial Group, PNC Financial Services
Fannie Mae, Regions Financial Corp.
Fifth Third Bank, State Street Corp.
First Republic Bank, Truist Financial
Ford Motor Co., U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Dell Technologies
Anthem, General Motors
Cardinal Health, Kroger
Centene Corp., McKesson Corp.
Cigna, Walgreens Boots Alliance

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and imposing new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of a market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to wasting public and private resources on litigation.

Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication. 

The Risks of Antitrust by Anecdote

The failure of a well-resourced federal regulator, and more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers posed by a conclusory approach toward antitrust enforcement that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates to support consideration of such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm. 

The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.  

While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.

The Specter of No-Fault Antitrust Liability

The absence of any specific demonstration of market power suggests deficient lawyering or the inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter becomes the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.  

Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.

Remembering Why Market Power Matters

To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices.  The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.  

Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to resubmit its brief with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.  

It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle. 

This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue that we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, while U.S. merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer-welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource-reallocation and innovation-related efficiencies.


Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints. 

Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers considering the bills have not yet considered.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms from offering richer, more integrated services at all, since those integrations could be challenged as “discrimination” that comes at the cost of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.

In fact, the bill’s scope is so broad that some have argued the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that “[d]ue to limited resources, not all platform integration will be challenged.”

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a power the FTC defended vociferously, and one that may yet be restored by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. Anti-corruption laws are frequently used to punish opponents of the regime in China, who probably are also corrupt, but are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security law has also been used to target peaceful protestors and critical media thanks to its vague and overly broad drafting. 

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws applied at the enforcer’s discretion give the enforcer broad power to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her, may wish to use this discretionary power to prosecute firms that she feels are hurting society for unrelated reasons, such as because of political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but their application would require contentious value judgments that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem”, Khan highlights approvingly former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.

Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since the bills would give broad, discretionary powers to antitrust authorities controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s ‘Third Way’: A Different Road to the Same Destination,” argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
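Read literally, the coverage test is a simple conjunction of thresholds. Here is a minimal sketch in Python; the function name is mine, the figures are those quoted above, and the “critical trading partner” finding is treated as a given input rather than modeled:

```python
def is_covered_platform(us_users: int, market_cap_usd: float,
                        critical_trading_partner: bool) -> bool:
    """Sketch of the bill's 'covered platform' test as summarized above.
    All three conditions must hold; thresholds are the figures quoted
    in the text (not a legal reading of the bill itself)."""
    return (us_users >= 500_000
            and market_cap_usd > 600e9
            and critical_trading_partner)

# A hypothetical large platform meets all three prongs:
print(is_covered_platform(50_000_000, 1.5e12, True))   # True
# Failing any single prong takes a firm out of coverage:
print(is_covered_platform(400_000, 1.5e12, True))      # False
```

The conjunctive structure matters: a firm escapes coverage by failing any one prong, which is why the market-cap cutoff does so much of the work in limiting the bill to a handful of companies.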

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

This proposal would thus probably reduce investment in U.S. startups, since it would make it more difficult for them to be acquired, and would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with one another. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes substantially—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social-media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, sponsored by Rep. Joe Neguse (D-Colo.) and mirroring language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. In place of the current cap of $280,000 for mergers valued at more than $500 million, the new schedule would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
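The proposed schedule described in the two paragraphs above is just a tiered lookup by transaction value. A minimal sketch in Python (the function name is mine, and the treatment of deals falling exactly on a threshold is an assumption, since the summary gives ranges rather than inclusive/exclusive bounds):

```python
def proposed_filing_fee(deal_value: float) -> int:
    """Return the proposed merger filing fee (USD) for a transaction of the
    given value, using the tiers summarized above. Boundary handling at
    exact thresholds is assumed, not taken from the bill text."""
    tiers = [
        (5_000_000_000, 2_250_000),  # more than $5 billion
        (2_000_000_000, 800_000),    # $2 billion to $5 billion
        (1_000_000_000, 400_000),    # $1 billion to $2 billion
        (500_000_000, 250_000),      # $500 million to $1 billion
        (161_500_000, 100_000),      # $161.5 million to $500 million
        (0, 30_000),                 # below $161.5 million (if reportable)
    ]
    for threshold, fee in tiers:
        if deal_value > threshold:
            return fee
    return tiers[-1][1]

print(proposed_filing_fee(6e9))    # 2250000
print(proposed_filing_fee(300e6))  # 100000
```

Laid out this way, the redistributive design is easy to see: the top tier’s fee rises roughly eightfold over the current cap, while every tier below $1 billion gets a modest cut.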

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether more funding is actually good depends on how it is spent.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to enforce whichever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s ‘Third Way’: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC.”

Bad Blood at the FTC

Thom Lambert —  9 June 2021

John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards. Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud.

I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.

I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.

A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos. 

Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.

Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.

In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.

Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.

In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.  

Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.

Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:  

  • Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
  • Process and present the information from its extensive testing in formats that will be acceptable to regulators;
  • Navigate the pre-market regulatory approval process in different countries across the globe;
  • Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
  • Develop means of manufacturing its products at scale;
  • Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
  • Market its tests to hospitals and health-care professionals.

These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.     

To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.

Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world.  It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.

In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.

Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost.[2] But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money.[3] Raising price above that level would hurt both consumers and the monopolist.

When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.

This unfortunate situation is likely to occur when complement producers that possess market power are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This pushes the combined price above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.

Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price.  This would obviously benefit consumers.
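The double-marginalization logic can be checked with a toy model (all numbers hypothetical): linear demand for the test/platform bundle, zero costs, and two complement sellers who either price independently or as one merged firm.

```python
# Toy model of double marginalization for perfect complements.
# Assumed (illustrative) linear demand Q = a - b*(p1 + p2), zero marginal cost.

a, b = 12.0, 1.0  # demand intercept and slope (made-up numbers)

# Separate firms: each maximizes p_i * (a - b*(p1 + p2)), taking the other's
# price as given. The first-order condition a - b*(p1 + p2) - b*p_i = 0 gives
# the symmetric equilibrium p_i = a / (3b), so the combined price is 2a / (3b).
p_separate = a / (3 * b)
combined_separate = 2 * p_separate
q_separate = a - b * combined_separate
profit_separate = combined_separate * q_separate  # joint profit of both firms

# Merged firm: maximizes P * (a - b*P) over the combined price P,
# yielding the standard monopoly price P = a / (2b).
combined_merged = a / (2 * b)
q_merged = a - b * combined_merged
profit_merged = combined_merged * q_merged

print(f"separate: combined price {combined_separate}, joint profit {profit_separate}")
print(f"merged:   combined price {combined_merged}, joint profit {profit_merged}")
# The merged firm charges a LOWER combined price yet earns HIGHER joint profit,
# so both consumers and the sellers gain: the double-marginalization result.
```

With these illustrative parameters, independent pricing yields a combined price of 8 and joint profit of 32, while the merged firm charges 6 and earns 36, showing why eliminating double marginalization benefits both the firms and their customers.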

In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it.  In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.

The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.

The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.

But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.

There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.

Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, and how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.

In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs. 

There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.

In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.

In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same. 


[1] If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.

[2] This ability is market power.  In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.

[3] Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for a firm to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D becomes a stock of R&D that it can use in the production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate the firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.

All this method requires to obtain a firm’s stock of R&D for this year is the firm’s R&D stock for last year and its investment in R&D (i.e., its R&D expenditures). This year’s R&D stock is then the sum of those R&D expenditures and the undepreciated portion of last year’s stock carried forward into this year.

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would use only a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.
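The recursion described above can be sketched in a few lines. The figures are illustrative only, and timing conventions vary across studies; this sketch adds each year’s expenditure to the depreciated prior stock.

```python
def rd_stock(expenditures, depreciation=0.15, initial_stock=0.0):
    """Perpetual inventory method: each year's R&D stock is that year's
    R&D expenditure plus the undepreciated stock carried forward.
    (A sketch; timing conventions for when spending enters the stock vary.)"""
    stocks = []
    stock = initial_stock
    for spend in expenditures:
        stock = spend + (1 - depreciation) * stock
        stocks.append(stock)
    return stocks

# Illustrative (made-up) annual R&D expenditures over five years:
spending = [100, 110, 120, 130, 140]
print(rd_stock(spending))  # second year, e.g.: 110 + 0.85 * 100 = 195.0
```

The 15% depreciation rate is Hall’s suggested default; as the text notes, the appropriate rate varies considerably by industry.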

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate at which the stock of R&D depreciates each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), though estimates presented by Hall, along with Wendy Li (2018), suggest the figure varies widely across industries and can be as high as 50% in some.

The other assumption required for this method is an estimate of the firm’s initial stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990 to 2016. This means that you can calculate the firm’s stock of R&D for each year, via the formula above, once you have its R&D stock in the previous year.

When calculating the firm’s R&D stock for 2016, you need to know what its R&D stock was in 2015; to calculate its R&D stock for 2015, you need to know its R&D stock in 2014; and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
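Hall’s starting-value assumption, and the claim that its influence quickly washes out, can be checked numerically. All figures below are made up for illustration (expenditures growing at Hall’s assumed 8% per year over a 1990-2016 sample):

```python
def rd_stock_series(expenditures, delta=0.15, g=0.08, hall_start=True):
    """Perpetual inventory with Hall's starting value:
    initial stock = first year's expenditure / (delta + g).
    Setting hall_start=False starts from zero instead (illustrative sketch)."""
    stock = expenditures[0] / (delta + g) if hall_start else 0.0
    series = []
    for spend in expenditures:
        stock = spend + (1 - delta) * stock
        series.append(stock)
    return series

# Made-up expenditures growing 8% per year, 1990-2016 (27 years):
spend = [100 * 1.08 ** t for t in range(27)]

with_start = rd_stock_series(spend, hall_start=True)
zero_start = rd_stock_series(spend, hall_start=False)

# At 15% depreciation, only 0.85**5 (about 44%) of the starting value
# survives five years, so the two series converge by the end of the sample:
gap = (with_start[-1] - zero_start[-1]) / with_start[-1]
print(f"relative gap in 2016 stock due to starting value: {gap:.2%}")
```

Running this sketch, the two starting-value choices differ noticeably in 1990 but by well under 1% in 2016, consistent with the point that the exact starting value matters little in a sufficiently long series.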

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value it. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.