Regrettably, but not unexpectedly, the Federal Trade Commission (FTC) yesterday threw out a reasoned decision by its administrative law judge and ordered DNA-sequencing provider Illumina Inc. to divest GRAIL Inc., maker of a multi-cancer early detection (MCED) test.
The FTC claims that this vertical merger would stifle competition and innovation in the U.S. market for life-saving cancer tests. The FTC’s decision ignores Illumina’s ability to use its resources to obtain regulatory clearances and bring GRAIL’s test to market more quickly, thereby saving many future lives. Other benefits of the transaction, including the elimination of double marginalization, have been succinctly summarized by Thom Lambert. See also the outstanding critique of the FTC’s case by Bruce Kobayashi, Jessica Melugin, Kent Lassman, and Timothy Muris, and this update by Dan Gilman.
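To see the arithmetic behind the elimination of double marginalization, consider a minimal textbook sketch in Python. The demand and cost numbers below are invented for illustration (they have nothing to do with the Illumina/GRAIL record); the point is simply that stacking two monopoly markups yields a higher price and lower output than a single integrated markup:

```python
# Minimal textbook sketch of double marginalization with linear demand
# P = a - b*Q and upstream marginal cost c. All numbers are illustrative.
a, b, c = 100.0, 1.0, 20.0

# Separate firms: upstream sets wholesale price w, downstream marks up again.
w = (a + c) / 2                   # upstream's profit-maximizing wholesale price
q_separate = (a - w) / (2 * b)    # downstream's profit-maximizing quantity
p_separate = a - b * q_separate   # resulting retail price

# Vertically integrated firm: a single markup over marginal cost c.
q_integrated = (a - c) / (2 * b)
p_integrated = a - b * q_integrated

print(f"Separate firms:  P = {p_separate:.0f}, Q = {q_separate:.0f}")     # P = 80, Q = 20
print(f"Integrated firm: P = {p_integrated:.0f}, Q = {q_integrated:.0f}") # P = 60, Q = 40
```

With these numbers, integration lowers the retail price from 80 to 60 and doubles output, which is the standard consumer-welfare case for vertical mergers that the commission's theory must overcome.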
The transaction’s potential boon to consumers and patients has, alas, been sacrificed at the altar of theoretical future harms in a not-yet-existing MCED market. The commission’s decision also ignores Illumina’s proffered safeguards (embodied in contractual assurances) that it would make its platform available to third parties in a neutral fashion.
The FTC’s holding comes in tandem with an earlier European Commission decision prohibiting Illumina’s acquisition of GRAIL and imposing a large fine. These two decisions epitomize antitrust enforcement policy at its worst: the sacrifice of clear and substantial near-term welfare benefits to consumers (including lives saved!) based on highly questionable future harms that cannot be reasonably calibrated at this time. A federal appeals court should quickly and decisively overturn this problematic FTC holding, and a European tribunal should act in similar fashion.
The courts cannot, of course, undo the harm flowing from delays in moving GRAIL’s technology forward. This is a sad day for believers in economically sound, evidence-based antitrust enforcement, as well as for patients and consumers.
A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.
Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.
The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).
(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse of dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)
So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They are enthusiastically announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.
Fordham Law’s Antitrust Conference
Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.
In his remarks, AAG Kanter seemed to be endorsing a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. In taking such an approach, however, there is a serious risk that efficiency-seeking actions may be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)
Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).
Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).
Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was even bolder and more alarming. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).
In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics free” anti-merger revisionism.
Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.
Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.
Conclusion
What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.
It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court.
Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out.
Depending on whom you ask, complexity theory is everything from a revolutionary paradigm to a lazy buzzword. What would it mean to apply it in the context of antitrust and would it, in fact, be useful?
Given its numerous applications, scholars have proposed several definitions of complexity theory, invoking different kinds of complexity. According to one, complexity theory is concerned with the study of complex adaptive systems (CAS)—that is, networks that consist of many diverse, interdependent parts. A CAS may adapt and change, for example, in response to past experience.
That does not sound too strange as a general description either of the economy as a whole or of markets in particular, with consumers, firms, and potential entrants among the numerous moving parts. At the same time, this approach contrasts with orthodox economic theory—specifically, with the game-theory models that rule antitrust debates and that prize simplicity and reductionism.
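To make the CAS idea concrete, here is a deliberately tiny sketch in Python, loosely modeled on the well-known El Farol bar problem. Every parameter is arbitrary; the point is that many agents, each adapting only to its own past experience, generate an aggregate pattern (attendance hovering near capacity) that no individual intends or controls:

```python
import random

# A deliberately tiny complex-adaptive-system sketch (loosely El Farol-style).
# All parameters are arbitrary; the point is emergent aggregate behavior.
random.seed(42)

N, CAPACITY, ROUNDS = 100, 60, 20
prob_enter = [0.5] * N  # each agent's adaptive propensity to enter the market

for t in range(ROUNDS):
    entrants = {i for i in range(N) if random.random() < prob_enter[i]}
    crowded = len(entrants) > CAPACITY
    for i in entrants:
        # entering a crowded market was a bad experience; adapt accordingly
        delta = -0.05 if crowded else 0.05
        prob_enter[i] = min(1.0, max(0.0, prob_enter[i] + delta))
    print(f"round {t:2d}: {len(entrants)} entrants (capacity {CAPACITY})")
```

No agent knows the capacity, yet aggregate entry self-organizes around it; change the seed or the adaptation step and the path changes, while the emergent pattern persists.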
As both a competition economist and a history buff, my primary point of reference for complexity theory is a scholarly debate among Bronze Age scholars. Sound obscure? Bear with me.
The collapse of several flourishing Mediterranean civilizations in the 12th century B.C. (Mycenae and Egypt, to name only two) puzzles historians as much as today’s economists are stumped by the question of whether any particular merger will raise prices.[1] Both questions encounter difficulties in gathering sufficient data for empirical analysis (the lack of counterfactuals and foresight in one case, and 3,000 years of decay in the other), forcing a recourse to theory and possibility results.
Earlier Bronze Age scholarship blamed the “Sea Peoples,” invaders of unknown origin (possibly Sicily or Sardinia), for the destruction of several thriving cities and states. The primary source for this thesis was statements attributed to the Egyptian pharaoh of the time. More recent research, while acknowledging the role of the Sea Peoples, has gone to great lengths to point out that, in many cases, we simply don’t know. Alternative explanations (famine, disease, systems collapse) are individually unconvincing, but might each have contributed to the end of various Bronze Age civilizations.
Complexity theory was brought into this discussion with some caution. While acknowledging the theory’s potential usefulness, Eric Cline writes:
We may just be applying a scientific (or possibly pseudoscientific) term to a situation in which there is insufficient knowledge to draw firm conclusions. It sounds nice, but does it really advance our understanding? Is it more than just a fancy way to state a fairly obvious fact?
In a review of Cline’s book, archaeologist Guy D. Middleton agreed that the application of complexity theory might be “useful” but also “obvious.” Similarly, in the context of antitrust, I think complexity theory may serve as a useful framework to understand uncertainty in the marketplace.
Thinking of a market as a CAS can help to illustrate the uncertainty behind every decision. For example, a formal economic model with a clear (at least, to economists) equilibrium outcome might predict that a certain merger will give firms the incentive and ability to reduce spending on research and development. But the lens of complexity theory allows us to better understand why we might still be wrong, or why we are right, but for the wrong reasons.
We can accept that decisions that are relevant and observable to antitrust practitioners (such as price and production decisions) can be driven by things that are small and unobservable. For example, a manager who ultimately calls the shots on R&D budgets for an airplane manufacturer might go to a trade fair and become fascinated by a cool robot that a particular shipyard presented. This might have been the key push that prompted her to finance an unlikely robotics project proposed by her head engineer.
Her firm is, indeed, part of a complex system—one that includes the individual purchase decisions of consumers, customer feedback, reports from salespeople in the field, news from science and business journalists about the next big thing, and impressions at trade fairs and exhibitions. These all coalesce in the manager’s head and influence simple decisions about her R&D budget. But I have yet to see a merger-review decision that predicted effects on innovation from peeking into managers’ minds in such a way.
This little story might be a far-fetched example of the Butterfly Effect, perhaps the most familiar concept from complexity theory. Just as the flap of a butterfly’s wings might cause a storm on the other side of the world, the shipyard’s earlier decision to invest in a robotic manufacturing technology resulted in our fictitious aircraft manufacturer’s decision to invest more in R&D than we might have predicted with our traditional tools.
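The canonical mathematical illustration is the Lorenz system, in which two trajectories that begin almost identically end up in entirely different places. The sketch below uses crude Euler integration and the standard Lorenz parameters, with a 1e-8 perturbation standing in for the wing flap; it is illustrative only, but it shows the mechanism:

```python
# The Butterfly Effect in the Lorenz system: two trajectories that start
# almost identically diverge completely. Crude Euler integration; the
# 1e-8 perturbation plays the role of the butterfly's wing flap.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)          # baseline trajectory
b = (1.0 + 1e-8, 1.0, 1.0)   # the same world, one wing flap later

for step in range(5001):
    if step % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   distance between trajectories = {gap:.2e}")
    a, b = lorenz_step(*a), lorenz_step(*b)
```

By the end of the run, the microscopic initial gap has grown to the full size of the attractor: prediction fails not because the system is random, but because it is exquisitely sensitive.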
Indeed, it is easy to think of other small events that can have consequences leading to price changes that are relevant in the antitrust arena. Remember the cargo ship Ever Given, which blocked the Suez Canal in March 2021? One reason mentioned for its distress was unusually strong winds (whether a butterfly was to blame, I don’t know) pushing the highly stacked containers like a sail. The disruption to supply chains was felt in various markets across Europe.
In my opinion, one benefit of admitting this complexity is that it can make ex post evaluation more common in antitrust. Indeed, some researchers are doing great work on this. Enforcers are understandably hesitant to admit that they might get it wrong sometimes, but I believe that we can acknowledge that we will not ultimately know whether merged firms will, say, invest more or less in innovation. Complexity theory tells us that, even if our best and most appropriate model is wrong, the world is not random. It is just very hard to understand and hinges on things that are neither straightforward to observe, nor easy to correctly gauge ex ante.
Turning back to the Bronze Age, scholars have an easier time observing that a certain city was destroyed and abandoned at some point in time than they do in correctly naming the culprit (the Sea Peoples, a rival power, an earthquake?). The appeal of complexity theory is not just that it lifts a scholar’s burden to name one or a few predominant explanations, but that it grants confidence that the outcome itself arose out of a complex system: the big and small effects that factors such as famine, trade, weather, and fortune may have had on the city’s ability to defend itself against attack, and the individual-but-interrelated decisions of a city’s citizens to stay or leave following a catastrophe.
Similarly, for antitrust experts, it is easier to observe a price increase following a merger than to correctly guess its reason. Where economists differ from archaeologists and classicists is that they don’t just study the past. They have to continue exploring the present and future. Imagine that an agency clears a merger that we would have expected not to harm competition, but it turns out, ex post, that it was a bad call. Complexity theory doesn’t just offer excuses for where reality diverged from our prediction. Instead, it can tell us whether our tools were deficient or whether we made an “honest mistake.” As investigations are always costly, it is up to the enforcer (or those setting their budget) to decide whether it makes sense to expand investigations to account for new, complex phenomena (reading the minds of R&D managers will probably remain out of the budget for the foreseeable future).
Finally, economists working on antitrust problems should not see this as belittling their role, but as a welcome frame for their work. Computing diversion ratios or modeling a complex market as a straightforward set of equations might still be the best we can do. A model that is right on average gets us closer to the right answer and is certainly preferred to having no clue what’s going on. Where we don’t have precedent to guide us, we have to resort to models that may be wrong, despite getting everything right that was under our control.
A few of the things that Nicolas Petit and Thibault Schrepel call for are comfortably established in the economist’s toolkit. They might not, however, always be put to use where they should be. Notably, there are feedback loops in dynamic models. Even in static models, it is possible to show how a change in one variable has direct and indirect (second-order) effects on an outcome. The typical merger investigation is concerned with short-term effects, perhaps those materializing over the three to five years following a merger. These short-term effects may be relatively easy to approximate in a simple model. Granted, Petit and Schrepel’s article adopts a wide understanding of antitrust—including pro-competitive market regulation—but this seems like an important caveat, nonetheless.
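As a hedged illustration of what direct and indirect effects look like even in a static model, consider the toy differentiated-duopoly pricing sketch below (all demand and cost parameters are invented). A cost shock to firm 1 raises firm 1’s price directly; firm 2’s price moves only indirectly, through best-response feedback; and iterating the responses to a fixed point is precisely the sort of feedback loop described above:

```python
# Direct vs. indirect (feedback) effects in a static duopoly pricing model.
# Demand for each firm: q = A - p + d * rival_price. Parameters are invented.
A, d = 100.0, 0.5   # demand intercept and cross-price sensitivity

def best_response(own_cost, rival_price):
    # profit-maximizing price given demand q = A - p + d * rival_price
    return (A + own_cost + d * rival_price) / 2

def equilibrium(c1, c2, iters=100):
    p1 = p2 = A  # arbitrary starting point; iteration converges since d < 1
    for _ in range(iters):
        p1, p2 = best_response(c1, p2), best_response(c2, p1)
    return p1, p2

base = equilibrium(c1=20.0, c2=20.0)
shocked = equilibrium(c1=30.0, c2=20.0)     # cost shock hits firm 1 only
direct_only = best_response(30.0, base[1])  # firm 1 reprices, feedback ignored

print(f"firm 1: {base[0]:.2f} -> {shocked[0]:.2f} (direct effect alone: {direct_only:.2f})")
print(f"firm 2: {base[1]:.2f} -> {shocked[1]:.2f} (moved purely by feedback)")
```

Firm 1’s full equilibrium response exceeds its feedback-free response precisely because of that second-order channel, and firm 2’s price moves even though nothing happened to its own costs.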
In conclusion, complexity theory is something economists and lawyers who study markets should learn more about. It’s a fascinating research paradigm and a framework in which one can make sense of small and large causes having sometimes unpredictable effects. For antitrust practitioners, it can advance our understanding of why our predictions can fail when the tools and approaches we use are limited. My hope is that understanding complexity will increase openness to ex post evaluation and inform expectations about antitrust enforcement (and its limits). At the same time, it is still an (economic) question of costs and benefits as to whether further complications in an antitrust investigation are worth it.
[1] A fascinating introduction that balances approachability and source work is YouTube’s Extra History series on the Bronze Age collapse.
U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)
Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).
Discussion
At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting transactions that harmfully reduce output and undermine innovation, such as hard-core collusion and courses of conduct that inefficiently exclude rivals and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.
This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.
The degree of congruence between national security and appropriate consumer welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:
defense-industry mergers;
the intellectual property-antitrust interface, with a focus on patent licensing; and
proposed federal antitrust legislation.
The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national defense interests. The second topic does not manifest a tension between antitrust and national security; rather, it illustrates a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.
Defense-Industry Mergers
The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.
The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”
Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).
Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.
The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:
Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.
Intellectual-Property Licensing
Shifts in government IP-antitrust patent-licensing policy perspectives
Intellectual-property (IP) licensing, particularly involving patents, is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumers’ and producers’ surplus). See generally, for example, Daniel Spulber’s The Case for Patents and Jonathan Barnett’s Innovators, Firms, and Markets. Patents are a property right, and they do not necessarily convey market power, as the federal government has recognized (see the 2017 DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property).
Standard setting through standard setting organizations (SSOs) has been a particularly important means of spawning valuable benchmarks (standards) that have enabled new patent-backed technologies to drive innovation and enable mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing these products to market and in encouraging follow-on innovations that have driven fast-paced, welfare-enhancing product and process quality improvements. The 2017 licensing guidelines capture the economics well:
Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …
[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.
Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here). Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:
they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; and
they seek to obtain injunctions against infringers.
In addition, antitrust policy officials (see the 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first-best (included in the standard) and second-best (not included) competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between the first- and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
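For concreteness, the arithmetic of that “incremental difference” benchmark is simple enough to sketch (the per-unit values below are invented):

```python
# A sketch of the "incremental difference" FRAND benchmark described above.
# Per-unit values are invented for illustration.
value_first_best = 5.00    # ex ante value of the technology chosen for the standard
value_second_best = 4.00   # ex ante value of the best alternative not chosen
royalty_cap = value_first_best - value_second_best

# The critique (e.g., Wright 2013): once the standard is adopted, the chosen
# technology's contribution to the standard's overall value can far exceed
# this ex ante cap, so the benchmark may systematically undercompensate
# SEP holders.
print(f"ex ante incremental-value royalty cap: ${royalty_cap:.2f} per unit")
```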
This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (a classic property-right remedy) encourage opportunistic patent infringement and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They also are harmed by the implication that they must be limited to a relatively low hypothetical “ex ante” licensing rate—a rate that totally fails to take into account the substantial economic-welfare value that their contribution to the standard confers on the economy. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.
Recognizing this problem, in 2018, DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which recognized that:
antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)
Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.
Regrettably, antitrust-enforcement-agency statements during the last year effectively have rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a draft policy statement on licensing negotiations and remedies for SEPs, which displaces a 2019 statement that had been in line with the NMA. Unless the FTC and Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.
The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:
China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …
Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.
A Center for Security and International Studies submission on the draft policy statement (signed by a former deputy secretary of the DOD, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:
[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.
Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:
The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.
That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China national-security issues (footnotes omitted):
[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …
The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.
Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.
Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …
Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.
A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.
These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, prior International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”
Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”
In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.
FTC v. Qualcomm: Misguided enforcement and national security
U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):
The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.
Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless devices makers. Qualcomm also makes computer processors and chips embodied in cutting edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high tech American success story that has greatly benefited consumers.
Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMS that have entered a licensing agreement with Qualcomm.
The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy allegedly “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.
The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]
Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.
The appellate decision in Qualcomm largely supports pillar four of the NMA, that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year-by-year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.
Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]
Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):
[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.
The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).
Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.
Proposed Federal Antitrust Legislation
Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, undermining the consumer welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:
Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.
A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity. These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …
The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses. U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors. Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.
If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition. Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs. Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …
Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies. The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.
Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”
The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.
Conclusion
Enlightened antitrust enforcement, centered on consumer welfare, can and should be read in a manner that is harmonious with national-security interests.
The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.
Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.
In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.
Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.
The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.
Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.
Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.
The RFI Reflects the False Premise that Competition is Declining in the United States
The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:
Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.
This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:
Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.
Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.
Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.
In short, the strongest justification offered for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.
The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship
The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)
What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”
Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).
The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.
For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”
The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.
Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”
Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)
The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”
This question answers itself, by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:
The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.
By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).
These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.
New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI
The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.
As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):
[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.
It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that Section 7 enforcement serves.
Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”
Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.
One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.
In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.
New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies
The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.
As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.
In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”
New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.
Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.
Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):
When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.
Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.
New Guidelines, if Issued, Should Give Due Credit to Efficiencies
Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.
Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable feature of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.
Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)
Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.
Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more helpful approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”
She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.
The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.
Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.
Conclusion
If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:
Eschew references to dated and discredited case law;
Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).
Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.
Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.
Background: The Grudging Acceptance of Merger Efficiencies
Not long ago, economically literate antitrust teachers in the United States enjoyed poking fun at such benighted 1960s Supreme Court decisions as Procter & Gamble (following in the wake of Brown Shoe and Philadelphia National Bank). Those holdings—which not only rejected efficiencies justifications for mergers, but indeed “treated efficiencies more as an offense”—seemed a thing of the past, put to rest by the rise of an economic approach to antitrust. Several early European Commission merger-control decisions also arguably embraced an “efficiencies offense.”
Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”
In its first few years of merger review, which was authorized in 1989, the European Commission was hostile to merger-efficiency arguments. In 2004, however, the EC promulgated horizontal merger guidelines that allow for the consideration of efficiencies, but only if three cumulative conditions (consumer benefit, merger specificity, and verifiability) are satisfied. A leading European competition practitioner has characterized several key European Commission merger decisions in the last decade as giving rather short shrift to efficiencies. In light of that observation, the practitioner has advocated that “the efficiency offence theory should, once again, be repudiated by the Commission, in order to avoid deterring notifying parties from bringing forward perfectly valid efficiency claims.”
In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.
Two Enforcement Matters with ‘Efficiencies Offense’ Overtones
Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)
FTC v. Facebook
The FTC’s 2020 federal district court monopolization complaint against Facebook, still in the motion to dismiss the amended complaint phase (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the FTC effectively treats merger-related efficiencies as the basis for its critique of those acquisitions. Specifically:
[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.
The FTC’s concerns about a scale-based merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”
UK Competition and Markets Authority (CMA) v. Facebook
The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . . These platforms license the use of Giphy for its users.”
The CMA theorized that Facebook could harm competition by (1) restricting its competitors’ access to Giphy’s digital libraries; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display advertising business.
As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its gif libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”
Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:
Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.
There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent that the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”
Are the Facebook Cases Merely Random Straws in the Wind?
At first blush, it might seem that one reads too much into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic efficiencies arguments (whose status was tenuous at enforcement agencies to begin with) may actually be viewed as “offensive” by the new breed of enforcers.
In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.
Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.
Even more revealing is the FTC’s September 2021 withdrawal of its approval of the 2020 Vertical Merger Guidelines (again by a 3-2 vote), which took direct aim at efficiencies. As the FTC’s announcement of that action explained:
The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.
Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation in distribution methods that the antitrust laws otherwise encourage.”
Finally, newly confirmed Assistant Attorney General for Antitrust Jonathan Kanter (who is widely known as a Big Tech critic) has expressed his concerns about the consumer welfare standard and the emphasis on economics in antitrust analysis. Such concerns also suggest, at least by implication, that the Antitrust Division under Kanter’s leadership may manifest a heightened skepticism toward efficiencies justifications.
Conclusion
Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggests that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.
One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)
While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable risk discounting applied to projects that might become caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.
As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.
[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]
Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.
Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.
I will highlight five areas of concern with the DMU proposal.
I. Chilling Effects
The DMU’s ability to designate a firm as being of strategic market significance—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms. That may prove true, but there is nothing in the proposal limiting the DMU’s reach.
Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.
II. SMS Designations Should Not Apply to the Whole Firm
The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.
Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.
Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.
III. The DMU Cannot Be Evidence-based Given its Goals and Objectives
The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.
Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.
Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.
Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.
IV. Speed of Proposals
Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.
As I mentioned earlier, what Mr. Coscelli regards as a more cautious standard of proof is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than upon itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.
But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions, or PCIs, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.
The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar. Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.
V. Antitrust Is Not Always the Answer
Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.
Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.
Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.
There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.
Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.
In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not suited to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.
[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.
The Federal Trade Commission (FTC) has taken another step away from case-specific evaluation of proposed mergers and toward an ex ante regulatory approach in its Oct. 25 “Statement of the Commission on Use of Prior Approval Provisions in Merger Orders.” Though not unexpected, this unfortunate initiative once again manifests the current FTC leadership’s disdain for long-accepted economically sound antitrust-enforcement principles.
Discussion
High levels of merger activity should, generally speaking, be viewed as a symptom of a vibrant economy, not a reason for economic concern. Horizontal mergers typically are driven by the potential to realize real cost savings, unrelated to anticompetitive reductions in output.
Non-horizontal mergers often yield welfare-enhancing reductions in double marginalization, while uniting complements and achieving synergies that generate efficiencies. More generally, proposed acquisitions frequently reflect an active market for corporate control that seeks to reallocate scarce resources to higher-valued uses (see, for example, Henry Manne’s seminal article on “Mergers and the Market for Corporate Control”). Finally, by facilitating cost reductions, synergies, and improvements in resource allocations within firms, mergers may allow the new consolidated entity to compete more effectively in the marketplace, thereby enhancing competition. A stylized illustration of the double-marginalization point appears below.
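To see why eliminating double marginalization helps consumers, consider a stylized textbook example (my own illustration, using an assumed linear demand curve and cost figure, not drawn from Manne or any other source cited here). Suppose inverse demand is $P = 100 - Q$ and the upstream firm’s marginal cost is $c = 20$:

$$\begin{aligned} \text{Separate monopolists:}\quad & Q = \tfrac{100 - w}{2}, \;\; w^{*} = 60 \;\Rightarrow\; Q = 20,\; P = 80, \\ \text{Integrated firm:}\quad & \max_{Q}\,(100 - Q - 20)\,Q \;\Rightarrow\; Q = 40,\; P = 60. \end{aligned}$$

In this example, vertical integration doubles output and cuts the retail price from 80 to 60, because a single monopoly margin replaces two stacked margins; consumers and the merged firm are both better off.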
Given the economic benefits frequently generated by mergers, government antitrust enforcers should not discourage them, nor should they intervene to block them, absent a strong showing that a particular transaction would likely reduce competition and harm consumer welfare. In the United States, the Hart-Scott-Rodino Premerger Notification Act of 1976 (HSR) and its implementing regulations generally have reflected this understanding. They have done this by requiring that proposed transactions above a certain size threshold be notified to the FTC and the U.S. Justice Department (DOJ), and by providing a framework for timely review, allowing most notified mergers to close promptly.
In the relatively few cases where agency enforcement staff have identified competitive problems, the HSR framework usually has enabled timely negotiation of possible competitive fixes (divestitures and, less typically, behavioral remedies). Where fixes have not been feasible, filing parties generally have been able to decide whether to drop a transaction or prepare for litigation within a reasonable time period. Under the HSR framework, enforcers generally have respected the time sensitivity of merger proposals and acted expeditiously (with a few exceptions) to review complicated and competitively sensitive transactions. The vast majority of HSR filings that facially raise no plausible competitive issues historically have been dealt with swiftly—often through “early termination” policies that provide the merging parties an antitrust go-ahead well before the end of HSR’s initial 30-day review period.
In short, although far from perfect, HSR processes have sought to minimize regulatory impediments to merger activity, consistent with the statutory mandate to identify and prevent anticompetitive mergers.
Regrettably, under the leadership of Chair Lina M. Khan, the FTC has taken unprecedented steps to undermine the well-understood HSR framework. As I wrote recently:
For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:
1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.
Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.
Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.
Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”
The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.
The FTC’s merger-review reign of error continues. Most recently, it released a policy guidance statement that effectively transforms the commission into a merger regulator whose assent is required for a specific category of mergers. This policy is at odds with HSR, which is designed to facilitate merger reviews, not to serve as a regulatory-approval mechanism. As the FTC explains in its Oct. 25 statement (citation to 1995 Statement omitted) (approved by a 3-2 vote, with Commissioners Noah Joshua Phillips and Christine S. Wilson dissenting):
On July 21, 2021, the Commission voted to rescind the 1995 Policy Statement on Prior Approval and Prior Notice Provisions (“1995 Statement”). The 1995 Statement ended the Commission’s then-longstanding practice of incorporating prior approval and prior notice provisions in Commission orders addressing mergers. With the rescission of the 1995 statement, the Commission returns now to its prior practice of routinely requiring merging parties subject to a Commission order to obtain prior approval from the FTC before closing any future transaction affecting each relevant market for which a violation was alleged. . . .
In addition, from now on, in matters where the Commission issues a complaint to block a merger and the parties subsequently abandon the transaction, the agency will engage in a case-specific determination as to whether to pursue a prior approval order, focusing on the factors identified below with respect to use of broader prior approval provisions. The fact that parties may abandon a merger after litigation commences does not guarantee that the Commission will not subsequently pursue an order incorporating a prior approval provision. . . . In some situations where stronger relief is needed, the Commission may decide to seek a prior approval provision that covers product and geographic markets beyond just the relevant product and geographic markets affected by the merger. No single factor is dispositive; rather, the Commission will take a holistic view of the circumstances when determining the length and breadth of prior approval provisions. [Six factors listed include the nature of the transaction; the level of market concentration; the degree to which the transaction increases concentration; the degree to which one of the parties pre-merger likely had market power; the parties’ history of acquisitiveness; and evidence of anticompetitive market dynamics.]
The Oct. 25 Statement is highly problematic in several respects. Its oversight requirements may discourage highly effective consent decree “fixes” of potential mergers, leading to wasteful litigation—or, alternatively, the abandonment of efficient transactions. What’s more, the threat of FTC prior approval orders (based on multiple criteria subject to manipulation by the FTC), even when parties abandon a proposed transaction (and thus, effectively have “done nothing”), smacks of unwarranted regulation of future corporate plans of disfavored firms, raising questions of fundamental fairness.
All told, the new requirements, combined with the FTC’s policies to end early terminations and to stop “greenlighting” routine merger transactions after a 30-day review, are further signs that the well-understood HSR consensus has been unilaterally abandoned by the FTC, based on purely partisan commission votes and without any public consultation. The FTC’s abrupt and arbitrary merger-review-related actions will harm the economy by discouraging welfare-promoting consolidations. These actions also fly in the face of sound public administration.
Conclusion
The FTC continues to move from its historic role of antitrust enforcer to that of antitrust regulator at warp speed, based on a series of 3-2 votes. The commission’s abandonment of a well-established bipartisan approach to HSR policy is particularly troublesome, given the new risks it creates for private parties considering acquisitions. These new risks will likely deter an unknown number of efficiency-enhancing, innovative combinations that could have benefited consumers and substantially strengthened the American economy.
Perhaps the imminent confirmation of Jonathan Kanter—an individual with many years of practical experience as a leading antitrust practitioner—to be assistant attorney general for antitrust will bring a more reasonable perspective to antitrust agency HSR policies. It may even convince a majority of the commission to return to the bipartisan HSR merger-review framework that has served the American economy well.
If not, perhaps congressional overseers might wish to investigate the implications for the American innovation economy and the rule of law stemming from the FTC’s de facto abandonment of HSR principles. Whether to fundamentally alter merger-review procedures should be up to Congress, not to three unelected officials.
Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.
Introduction
The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.
Strategic Approach
The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.
This section is long on glittering generalities. First, it begins with the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Are new enforcement standards being proposed to supplement or displace consumer-welfare enhancement?
Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why these observations are superior to alternative means for spotting illegal behavior) is left unexplained.
Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)
Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.
Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.
Policy Priorities
The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.
First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.)
Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out harmful anticompetitive behavior and consumer-protection harms. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion also is in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.
Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications involved in challenging such provisions, and a recognition of the welfare benefits that such restraints may generate in many circumstances. In that vein, the perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.
Operational Objectives
The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.
Evaluating the VP Memo: 4 Key Takeaways
The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:
Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts, reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions, even though several appellate courts had found that such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
Chair Khan’s unilateral statement of her policy priorities in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been adopted by unanimous vote. By ignoring this history, the VP Memo departs from a longstanding bipartisan tradition and will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seemingly unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by a 3-2 vote and appear to reflect little or no consultation with minority commissioners.
Conclusion
The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.
As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to recraft a new consensus policy document.
In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.
These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).
Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.
However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.
The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.
For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.
Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.
The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.
Kill Zones
One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.
The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.
But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.
Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.
Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:
[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.
However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.
Finally, the paper presents no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a period of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.
Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.
In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”
Mergers and Potential Competition
Scholars have also posited more direct anticompetitive effects from acquisitions of startups or nascent companies by incumbent technology firms.
Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm can also neutralize the potential innovation competition that the entrant would provide.
However, these antitrust theories of harm suffer from several important flaws. They rest upon restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.
Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
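To see the asymmetry concretely, consider a minimal numerical sketch of the monopoly-maintenance condition described above. All of the profit figures are hypothetical, chosen only to illustrate when a preemptive buyout can be mutually advantageous:

```python
# Hypothetical illustration of the monopoly-maintenance logic described above.
# A preemptive buyout is mutually advantageous only if monopoly profits
# exceed joint duopoly profits.
monopoly_profit = 100          # incumbent's profit if entry never occurs
incumbent_duopoly_profit = 35  # incumbent's profit if the rival enters
entrant_duopoly_profit = 35    # rival's profit if it enters and competes

max_offer = monopoly_profit - incumbent_duopoly_profit  # most the incumbent will pay
min_ask = entrant_duopoly_profit                        # least the rival will accept

print(max_offer > min_ask)  # True: 65 > 35, so a deal can be struck
# If the rival instead expected to displace the incumbent (profit near 100),
# min_ask would exceed max_offer and no mutually advantageous deal would exist.
```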
Second, potential competition does not always increase consumer welfare. Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.
For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.
There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.
In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.
Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.
Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.
If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
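That arithmetic can be verified mechanically. The sketch below simply reproduces the hypothetical counts from the example; the helper function and its name are ours, not drawn from any enforcement methodology:

```python
# Reproduces the hypothetical error counts discussed above (illustrative only).
def error_counts(total, bad, accuracy):
    good = total - bad
    false_positives = good * (1 - accuracy)  # procompetitive deals wrongly blocked
    false_negatives = bad * (1 - accuracy)   # anticompetitive deals wrongly cleared
    correct = total - false_positives - false_negatives
    return int(correct), int(false_positives), int(false_negatives)

for bad in (1_000, 2_500):
    correct, fp, fn = error_counts(10_000, bad, 0.75)
    print(f"{bad} bad deals: test -> {correct} correct, {fp} FP, {fn} FN; "
          f"no test -> {10_000 - bad} correct, {bad} FN")
```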
This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.
As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.
Killer Acquisitions
Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”
Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:
[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]
[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.
From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.
To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.
Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.
Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?
The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” a vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
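A back-of-the-envelope reading of those continuation rates underscores the point. The sketch below makes the most unfavorable assumption available, attributing the entire continuation gap to killer intent, which yields an upper bound on the share of overlapping deals that could plausibly have been “killed”:

```python
# Upper-bound reading of the continuation rates quoted above.
non_overlapping_continue = 0.175  # projects continued after non-overlapping deals
overlapping_continue = 0.134      # projects continued after overlapping deals

# Attribute the whole gap to killer intent (the most unfavorable assumption).
killed_at_most = non_overlapping_continue - overlapping_continue  # ~4.1 pp
benign_at_least = 1 - killed_at_most

print(f"At most {killed_at_most:.1%} of overlapping acquisitions look 'killed'; "
      f"at least {benign_at_least:.1%} are indistinguishable from ordinary deals.")
```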
Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that such acquisitions could increase innovation overall by boosting the returns to innovation.
And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:
For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.
One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:
The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.
Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.
In short, pharmaceutical companies do not compete only on innovation-related parameters, though these are obviously important, but also on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.
This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.
In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.
While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.
A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.
This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.
The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.
To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they hardly provide a sufficient basis on which to argue that enforcement should be tightened.
The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never, by itself, a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as the false positives to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.
Conclusion
Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here, and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.
Take the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful effort to bring an mRNA vaccine against COVID-19 to market offers a case in point.
The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.
Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.
The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.
In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.
Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits of catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.
The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has not been free of uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here for an overview).
While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.
In a very thoughtful 2017 speech, then Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:
[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.
Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.
The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.
Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.
That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition. As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.
Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.
Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.
As of now, the FTC’s departure from the rule of law has been notable in two areas:
Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
Its new advice rejecting time limits for the review of generally routine proposed mergers.
In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.
Rescission of the Unfair Methods of Competition Policy Statement
The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including consideration of any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.
In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.
The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.
In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.
New Guidance to Parties Considering Mergers
For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:
The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.
Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government as well; it is an optimal use of scarce agency resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.
Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.
Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”
The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.
[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].
Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.
More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, as an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, which is particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).
Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:
Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]
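Applying those historical percentages to the 2021 filing volume gives a rough sense of the implied investigative load. This is a minimal sketch using only the figures quoted above; the resulting counts are approximations, not agency statistics:

```python
# Rough arithmetic implied by the quoted figures (illustrative only).
filings = 2_900                   # HSR filings reported through end of July 2021
investigated = 0.13 * filings     # ~13% historically draw some investigation
second_requests = 0.03 * filings  # ~3% historically draw a Second Request

print(f"~{investigated:.0f} investigations, ~{second_requests:.0f} Second Requests")
# -> roughly 377 and 87; the remaining ~87% of filings are purely administrative.
```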
Proposed FTC Competition Rulemakings
The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:
First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.
Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]
In short, common-law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.
Conclusion
Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.
Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.
[This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at the European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]
A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.
Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.
But both instrumentalities are, at the end of the day, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts and can advance with haste and confidence toward the promulgation of new antitrust statutes.
The antitrust reform process that is unfolding gives cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.
The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention sweeps under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.
Moreover, a court-legislative iteration is useful when the issues under discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).
Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.
The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.
Besides, ad hoc statutes, such as the ones in discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and is it not in effect constrained by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.
The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral-conduct cases and in merger law? The answer is surely no. The theoretical economic literature has traveled a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.
Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.
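To see why the theory is descriptive rather than prescriptive, consider a minimal sketch of monopoly-platform pricing in the spirit of Rochet and Tirole’s work on two-sided markets (the notation here is illustrative, not drawn from any symposium piece). A card platform charges per-transaction prices $p_B$ to cardholders and $p_S$ to merchants, bears a per-transaction cost $c$, and faces demands $D_B(p_B)$ and $D_S(p_S)$ on each side:

\[
\max_{p_B,\,p_S}\;(p_B + p_S - c)\,D_B(p_B)\,D_S(p_S)
\quad\Longrightarrow\quad
\frac{p - c}{p} = \frac{1}{\eta_B + \eta_S},
\qquad
\frac{p_B}{\eta_B} = \frac{p_S}{\eta_S},
\]

where $p = p_B + p_S$ and $\eta_B$, $\eta_S$ are the demand elasticities on each side. The structure condition is the key point: capping one side’s price (say, merchant fees) does not eliminate the margin but shifts the price structure toward the other side, and whether that raises or lowers welfare depends on both elasticities. The theory makes the tradeoff precise; it does not say which way to resolve it.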
Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly becomes obsolete, are policy efforts needed at all?
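For readers who want the mechanics, here is a stripped-down sketch of Brian Arthur’s adoption model (the notation is mine, offered for illustration only). Adopters arrive sequentially, are randomly of type $R$ or $S$, and earn payoffs that rise with each technology’s installed base $n_A$ or $n_B$:

\[
\Pi_R(A) = a_R + r\,n_A,\quad \Pi_R(B) = b_R + r\,n_B \;\;(a_R > b_R);
\qquad
\Pi_S(A) = a_S + s\,n_A,\quad \Pi_S(B) = b_S + s\,n_B \;\;(b_S > a_S).
\]

Once $n_A - n_B$ exceeds $(b_S - a_S)/s$, even $S$-types, who intrinsically prefer $B$, choose $A$, and the market locks in; because $n_A - n_B$ follows a random walk between such barriers, lock-in occurs with probability one, so chance early arrivals, rather than intrinsic quality, can decide the winner. Note what the sketch does and does not deliver: it explains how an inferior standard can be selected, but it supplies no test by which a regulator could verify that the locked-in standard is in fact the inferior one, which is precisely the missing normative step.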
Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have remained conspicuously silent on antitrust reform).
There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, as with algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. They are insufficient to declare that the antitrust apparatus is dated and requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some of the wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no pro-competitive mergers.
Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sector, most of which suggest that industry concentration has risen. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on the grounds that they have often been drafted by experts at the request of antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.
With healthy skepticism in the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”
Similarly, the expert reports did not seriously question the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.
As should be clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent district court dismissal of the FTC case came nowhere close to ruling out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis on Twitter that fits within 280 characters.
[1] But some threshold conditions like agreement or dominance might also become dated.