
[The following is a guest post from Igor Nikolic, a research fellow at the European University Institute.]

The European Commission is working on a legislative proposal that would regulate the licensing framework for standard-essential patents (SEPs). A regulatory proposal leaked to the press has already been the subject of extensive commentary (see here, here, and here). The proposed regulation apparently will include a complete overhaul of the current SEP-licensing system and will insert a new layer of bureaucracy in this area.

This post seeks to explain how the EU’s current standardization and licensing system works and to provide some preliminary thoughts on the proposed regulation’s potential impacts. As it currently stands, it appears the regulation will significantly increase costs to the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in SEP-licensing negotiations.

It also might harm the EU’s innovativeness on the global stage and set precedents for other countries to regulate, possibly jeopardizing how the entire international technical-standardization system functions. An open public discussion about the regulation’s contents might provide more time to think about the goals the EU wants to achieve on the global technology stage.

How the Current System Works

Modern technological standards are crucial for today’s digital economy. 5G and Wi-Fi standards, for example, enable connectivity between devices in various industries. 5G alone is projected to add up to €1 trillion to the European GDP and create up to 20 million jobs across all sectors of the economy between 2021 and 2025. These technical standards are typically developed collaboratively through standards-development organizations (SDOs) and include patented technology, called standard-essential patents (SEPs).

Companies working on the development of standards within SDOs are required to disclose patents they believe to be essential to a standard, and to commit to license such patents on fair, reasonable, and non-discriminatory (FRAND) terms. For various reasons inherent to the system, there are far more disclosed patents that are potentially essential than patents that end up being truly essential to a standard. For example, one study calculated that 39,000 and 45,000 patents were declared essential to 3G UMTS and 4G LTE, respectively, while another estimated as many as 95,000 patent declarations for 5G. Commercial studies and litigated cases, however, provide a different picture: only about 10% to 40% of the disclosed patents were held to be truly essential to a standard.

The discrepancy between the tens of thousands of disclosed patents and the much lower number of truly essential patents is said to create an opaque SEP-licensing landscape. The principal reason for this mismatch, however, is that SDO databases of disclosed patents were never intended to provide an accurate picture of truly essential patents to be used in licensing negotiations. For standardization, the much greater danger lies in the possibility of some patents remaining undeclared, thereby avoiding a FRAND commitment and jeopardizing successful market implementation. From that perspective, the broadest possible patent declarations are encouraged, in order to guarantee that the standard will remain accessible to implementers on FRAND terms.

SEP licensing occurs both in bilateral negotiations and via patent pools. In bilateral negotiations, parties try to resolve various technical and commercial issues. Technical questions include:

  1. Whether and how many patents in a portfolio are truly essential;
  2. Whether such patents are infringed by standard-implementing products; and
  3. How many of these patents are valid.

Parties also need to agree on the commercial terms of a license, such as the level of royalties, the royalty-calculation methods, the availability of discounts, the amount of royalties for past sales, any cross-licensing provisions, etc.

SEP owners may also join their patents in a pool and license them in a single portfolio. Patent pools are known to significantly reduce transaction costs to all parties and provide a one-stop shop for implementers. Most licensing agreements are concluded amicably but, in cases where parties cannot agree, litigation may become necessary. The Huawei v ZTE case provided a framework for good-faith negotiation, and courts of the EU member states have become accustomed to evaluating the conduct of both parties.

What the Proposed Regulation Would Change

According to the Commission, SEP licensing is plagued with inefficiencies, apparently stemming from insufficient transparency and predictability regarding SEPs, uncertainty about FRAND terms and conditions, high enforcement costs, and inefficient enforcement.

As a solution, the leaked regulation would entrust the European Union Intellectual Property Office (EUIPO)—currently responsible for EU trademarks—with establishing a register of standards and SEPs, conducting essentiality checks that would assess whether disclosed patents are truly essential for a standard, providing the process to set up an aggregate royalty for a standard, and making individual FRAND-royalty determinations. The intention, it seems, is to replace market-based negotiations and institutions with centralized government oversight and price regulation.

How Many Standards and SEPs Are in the Regulation’s Scope?

From a legal standpoint, the first question raised by the regulation is, to what standards does it apply? The Commission, in its various studies, has often singled out 3G, 4G, and 5G cellular standards. This is probably because they have been in the headlines, due to international litigation and multi-million-euro FRAND determinations.

The regulation, however, would apparently apply to all SDOs that request SEP owners to license on FRAND terms and to any SEPs in force in any EU member state. This is a very broad definition that could potentially capture thousands of different standards across all sectors of the economy. Moreover, it isn’t limited just to European SDOs. Standards developed by international SDOs would also be ensnared by the rule, as long as at least one SEP is in force in an EU member state.

To give a sense of the magnitude of the task, the European Telecommunications Standards Institute (ETSI), a large European SDO, boasts that it annually publishes between 2,000 and 2,500 standards, while the Institute of Electrical and Electronics Engineers (IEEE), an SDO based in the United States, claims to have more than 2,000 standards. Earlier studies found that there were at least 251 interoperability standards in a laptop, while an average smartphone is estimated to contain a minimum of 30 interoperability standards. Of the standards in the laptop, 75% were licensed under FRAND terms.

In short, we may be talking about thousands of standards to be reported and checked by the EUIPO. Not only is this duplicative work (SDOs already have their own databases), but it would entail significant costs to SEP owners.

Aggregate Royalties May Not Add Anything New

The proposed regulation would allow contributors to a standard (not just SEP owners, but any entity that submits technical contributions to an SDO, whether patented or not) to agree on the aggregate royalty for the standard. The idea behind aggregate royalty rates is to provide transparency about the standard’s total price, so that implementers may account for royalties in the cost of their products. Furthermore, aggregate royalties may, in theory, reduce costs and facilitate SEP licensing, as the total royalty burden would be known in advance.

Beyond competition-law concerns (there are no mentions in the leaked regulation of any safeguards against exchanges of commercially sensitive information), it is not clear what practical effects the aggregate royalty-rate announcements would bring. Is it just a wishful theoretical maximum? To be on the safe side, contributors may just announce their maximum preference, knowing that—in the actual negotiations—prices would be lowered by caps and discounts. This is nothing new. We have already had individual SEP owners who publicly announced their royalty programs in advance for 4G and 5G. And patent pools bring price transparency to video-codec standards.

What’s more, agreement among all contributors is not required. Given that contributors have different business models (some may be vertically integrated, while others focus on technology development and licensing), it is difficult to imagine all of them coming to a consensus. The regulation would appear to allow different contributors to jointly notify their views on the aggregate royalty. This may create even more confusion for standard implementers. For example, one group of contributors could announce an aggregate rate of $10 per product, another 5% of the end-product price, and a third a lower rate of $1 per product. In practice, announcements of aggregate royalty rates may be meaningless.

Patent Essentiality Is Not the Same as Patent Infringement, Validity, or Value

The regulation also proposes to assess the essentiality of patents declared essential for a standard. It is hoped that this would improve transparency in the SEP landscape and help implementers assess from whom they need a license. For an implementer, however, it is important to know not only whether patents are essential for a standard, but also whether its products infringe those SEPs and whether the SEPs are valid.

A patent may be essential to a standard but not infringed by a concrete product. For example, a patent owner may have a 4G SEP that reads on base stations, but an implementer may manufacture and sell smartphones and thus does not infringe the relevant 4G SEP. Or a patent owner may hold SEPs that claim optional features of a standard, while an implementer may only use the standard’s mandatory features in its products. A study of U.S. SEP litigation found that SEPs were held to be infringed in only 30.7% of cases. In other words, in 69.3% of cases, an SEP was not considered to be infringed by accused products.

A patent may also be essential but invalid. Courts have the final say on whether granted patents fulfill patentability requirements. In the Unwired Planet v Huawei litigation in the UK, the court found two asserted patents valid, essential, and infringed, and two patents invalid.

Essentiality is, therefore, just one piece of the puzzle. Even if the parties were to accept the nonbinding essentiality determination (which is not guaranteed), they could still disagree over matters of infringement and validity. Essentiality checks are not a silver bullet that would eliminate all disputes.

Essentiality also should not be equated with the patent’s value. Not all patents are created equal. Some SEPs are related to breakthrough or core inventions, while others may be peripheral or optional. Economists have long found that the economic value of patents is highly skewed. Only a relatively small number of patents provide most of the value.

How Accurate and Reliable Is Sampling for Essentiality Assessments?

The leaked regulation provides that, every year, the EUIPO shall select a sample of claimed SEPs from each SEP owner, as well as from each specific standard, for essentiality checks. The Commission would adopt the precise methodology to ensure a fair and statistically valid selection that can produce sufficiently accurate results. Each SEP owner may also propose up to 100 claimed SEPs to be checked for essentiality for each specific standard.

The apparent goal of the samples is to reduce the costs of essentiality assessments. Analyzing essentiality is not a simple task. It takes time and money to produce accurate and reliable results. A thorough review of essentiality by patent pools was estimated to cost up to €10,000 per patent and to last two to three days. Another study reported spending 40-50 working hours preparing the claim charts used in essentiality assessments. If we consider that the EUIPO would potentially be directed to assess the essentiality of thousands of standards, it is easy to see how these costs could skyrocket and render the task impossible.
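To get a rough sense of scale, here is a back-of-envelope sketch using only figures already cited in this post (the roughly 95,000 patents declared for 5G and the upper-end per-patent review figures). It is illustrative only, not an estimate of what the regulation would actually cost.

```python
# Back-of-envelope sketch (illustrative only): what thorough essentiality checks
# would cost for a single standard, using figures cited earlier in this post.
declared_5g_patents = 95_000    # upper-end estimate of 5G declarations cited above
cost_per_check_eur = 10_000     # upper-end cost of a thorough per-patent review
hours_per_claim_chart = 45      # midpoint of the 40-50 working hours cited above

total_cost_eur = declared_5g_patents * cost_per_check_eur
total_hours = declared_5g_patents * hours_per_claim_chart

print(f"Checking every 5G declaration: ~EUR {total_cost_eur:,}")         # ~EUR 950,000,000
print(f"Claim-chart preparation alone: ~{total_hours:,} working hours")  # ~4,275,000 hours
```

Even if the true per-patent cost were only a fraction of the upper-end figure, the arithmetic for a single standard runs into the hundreds of millions of euros, which is why sampling enters the picture.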

The use of samples is not without concerns. It inevitably introduces certain margins of error. Keith Mallinson has suggested that a sample must be very large and include thousands of patents if any meaningful results are to be reached. It is therefore unclear why SEP owners would be limited to proposing only 100 patents for checking. Unless a widely accepted method to assess a large portfolio of declared patents were to be found, the results of these essentiality assessments would likely be imprecise and unreliable, and therefore fall far short of the goal of increased transparency.
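A simple sketch illustrates the point about sample size. Assuming, purely for illustration, that each essentiality check behaves like an independent draw from a portfolio (a strong simplification), the standard binomial margin of error around an estimated essentiality rate shrinks only slowly as the sample grows:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated essentiality share p,
    based on a simple random sample of n patents (binomial approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose the true share of genuinely essential patents in a portfolio is ~30%,
# consistent with the 10-40% range reported by the studies cited above.
p = 0.30
for n in (100, 1_000, 5_000):
    print(f"sample of {n:>5} patents: estimate {p:.0%} +/- {margin_of_error(p, n):.1%}")

# A 100-patent sample leaves roughly a +/-9 percentage-point band around the
# estimate; even 5,000 patents still leaves about +/-1.3 points.
```

On these simplified assumptions, a 100-patent sample yields a confidence band far too wide to serve as a reliable input into royalty-share calculations, which is consistent with Mallinson’s point.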

The Dangers of a Top-Down Approach and Patent Counting for Royalty Determinations

Concealed in the regulation is the possibility that the EUIPO could use a top-down approach for royalty determinations, under which each SEP owner would receive a proportional share of the total aggregate royalty for a standard. It requires:

  1. Establishing a cumulative royalty for a standard; and then
  2. Calculating each individual SEP owner’s share of that total royalty.

Now we can see why the aggregate rate becomes important. The regulation would allow the EUIPO to set up a panel of three conciliators to provide a nonbinding expert opinion on the aggregate royalty rate (in addition to, or regardless of, the rates already announced by contributors). Essentiality checks are also needed to filter out which patents are truly essential, and the resulting counts can be used to assess each SEP owner’s individual share.
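To make the mechanics concrete, here is a minimal sketch of a top-down allocation. All of the inputs (the aggregate rate and the patent counts) are hypothetical and chosen purely for illustration; the point is that each owner’s royalty depends only on its share of counted patents, not on the value, validity, or actual infringement of those patents.

```python
# Minimal sketch of a top-down royalty allocation, with hypothetical inputs.
aggregate_royalty_per_device = 10.00   # hypothetical aggregate rate, in EUR

# Hypothetical counts of patents deemed "truly essential" after essentiality checks.
essential_counts = {
    "Owner A": 300,   # assume several core, breakthrough inventions
    "Owner B": 300,   # assume mostly peripheral or optional features
    "Owner C": 400,
}
total_essential = sum(essential_counts.values())

for owner, count in essential_counts.items():
    share = count / total_essential
    royalty = share * aggregate_royalty_per_device
    print(f"{owner}: {share:.0%} of counted SEPs -> EUR {royalty:.2f} per device")

# Owners A and B receive identical per-device royalties because their counts are
# identical, even though the value of their portfolios may differ widely -- the
# patent-counting problem discussed in the bullet points below.
```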

A detailed analysis of this top-down approach exceeds the scope of this post, but here are the key points:

  • The approach relies on patent counting, treating every patent as having the same value. We have seen that this is not the case, and that value is, instead, highly skewed. Moreover, essential patents may be invalid or not infringed by specific devices, which is not factored into the top-down calculations.
  • The top-down approach is not used in commercial-licensing negotiations, and courts have frequently rejected its application. Industry practice is to use comparable licensing agreements. The top-down approach was used in Unwired Planet v Huawei only as a cross-check for the rates derived from comparable agreements. TCL v Ericsson relied on this method, but was vacated on appeal. The most recent Interdigital v Lenovo judgment considered and rejected its use, finding “no value in Interdigital’s Top-Down cross-check in any of its guises.”
  • Fundamentally, the EUIPO’s top-down approach would be tantamount to direct government regulation of technology prices. So far, there are no studies suggesting that something is wrong with the level of royalties that might require government intervention. In fact, studies point to the opposite: prices are falling over time.

Conclusion

As discussed, the regulation provides for an elaborate system of notifications of standards and declared SEPs, essentiality checks, and aggregate and individual royalty-rate determinations. Even with all of these data points, however, it is not clear that they would help with licensing. Parties may not accept them and may still end up in court.

Recent experience from the automotive sector demonstrates that knowing the essentiality and the price of SEPs did not translate into smoother licensing. Avanci is a platform that gathers almost all SEP owners for licensing 2G, 3G, and 4G SEPs to car manufacturers. It was intended to provide a one-stop shop for licensees by offering a single price for a large portfolio of SEPs. All patents included in the Avanci platform were independently tested for essentiality. Avanci, however, was faced with the reluctance of implementers to take a license. Only after litigating and prevailing did Avanci succeed in licensing the majority of the market.

Paradoxically, the most innovative companies—the ones that invest in the research and development of several different standardized solutions and rely on technology licensing as their business model—will bear the brunt of the regulation. It pays off, ironically, to be a user of standardized technology rather than the innovator.

The introduction of such elaborate government regulation of SEP licensing also has important international ramifications. It is easy to imagine that other countries might not be so thrilled with European regulators setting the aggregate rate for international standards and individual rates for their companies’ portfolios. China, in particular, might see it as an example and set up its own centralized agencies for royalty determinations. What may happen if European, Chinese, or some other regulators come up with different aggregate and individual royalty rates? The whole international standardization system could crumble.

In short, the regulation imposes significant costs on SEP owners that innovate and contribute their technologies to international standardization. Faced with excessive costs and overregulation, companies may abandon open and collaborative international standardization, based on FRAND licensing, and instead work on proprietary solutions in smaller industry groups. This would allow them to escape the ambit of EU regulation. Whether this is a better alternative is up for debate.

The European Commission on March 27 showered the public with a series of documents heralding a new, more interventionist approach to enforcing Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits “abuses of dominance.” This new approach threatens more aggressive, less economically sound enforcement against single-firm conduct in Europe.

EU courts may eventually constrain the Commission’s overreach in this area somewhat, but harmful business uncertainty will be the near-term reality. What’s more, the Commission’s new approach may unfortunately influence U.S. states that are considering European-style abuse-of-dominance amendments to their own substantive antitrust laws. As such, market-oriented U.S. antitrust commentators will need to be even more vigilant in keeping tabs on—and, where necessary, promptly critiquing—economically problematic shifts in European antitrust-enforcement policy.

The Commission’s Emerging Reassessment of Abuses of Dominance

In a press release summarizing its new initiative, the Commission made a “call for evidence” to obtain feedback on the adoption of first-time guidelines on exclusionary abuses of dominance under Article 102 TFEU.

In parallel, the Commission also published a “communication” announcing amendments to its 2008 guidance on enforcement priorities in challenging abusive exclusionary conduct. According to the press release, until final Article 102 guidelines are approved, this guidance “provides certain clarifications on its approach to determine whether to pursue cases of exclusionary conduct as a matter of priority.” An annex to the communication sets forth specific amendments to the 2008 guidance.

Finally, the Commission also released a competition policy brief (“a dynamic and workable effects-based approach to the abuse of dominance”) that discusses the policy justifications for the changes enumerated in the annex.

In short, the annex “toughens” the approach to abuse of dominance enforcement in five ways:

  1. It takes a broader view of what constitutes “anticompetitive foreclosure.” The annex rejects the 2008 guidance’s emphasis on profitability (cases where a dominant firm can profitably maintain supracompetitive prices or profitably influence other parameters of competition) as key to prioritizing matters for enforcement. Instead, a new, far less-demanding prosecutorial standard is announced, one that views anticompetitive foreclosure as a situation “that allow[s] the dominant undertaking to negatively influence, to its own advantage and to the detriment of consumers, the various parameters of competition, such as price, production, innovation, variety or quality of goods or services.” Under this new approach, highly profitable competition on the merits (perhaps reflecting significant cost efficiencies) might be challenged, say, merely because enforcers were dissatisfied with a dominant firm’s particular pricing decisions, or the quality, variety, and “innovativeness” of its output. This would be a recipe for bureaucratic micromanagement of dominant firms’ business plans by competition-agency officials. The possibilities for arbitrary decision making by those officials, who may be sensitive to the interests of politically connected rent seekers (say, less-efficient competitors), are obvious.
  2. The annex diminishes the importance of economic efficiency in dominant-firm analysis. The Commission’s 2008 guidance specified that Commission enforcers “would generally intervene where the conduct concerned has already been or is capable of hampering competition from competitors that are considered to be as efficient as the dominant undertaking.” The revised 2023 guidance “recognizes that in certain circumstances a less efficient competitor should be taken into account when considering whether particular price-based conduct leads to anticompetitive foreclosure.” This amendment plainly invites selective-enforcement actions to assist less-efficient competitors, placing protection of those firms above consumer-welfare maximization. In order to avoid liability, dominant firms may choose to raise their prices or reduce their investments in cost-reducing innovations, so as to protect a relatively inefficient competitive fringe. The end result would be diminished consumer welfare.
  3. The annex encourages further micromanagement of dominant-firm pricing and other business decisions. Revised 2023 guidance invites the Commission to “examine economic data relating to prices” and to possible below-cost pricing, in considering whether a hypothetical as-efficient competitor would be foreclosed. Relatedly, the Commission encourages “taking into account other relevant quantitative and/or qualitative evidence” in determining whether an as-efficient competitor can compete “effectively” (emphasis added). This focus on often-subjective criteria such as “qualitative” indicia and the “effectiveness” of competition could subject dominant firms to costly new business-planning uncertainty. Similarly, the invitation to enforcers to “examine” prices may be viewed as a warning against “overaggressive” price discounting that would be expected to benefit consumers.
  4. The annex imposes new constraints on a firm’s decision as to whether or not to deal (beneficial voluntary exchange, an essential business freedom that underlies our free-market system – see here, for example). A revision to the 2008 guidance specifies that, “[i]n situations of constructive refusal to supply (subjecting access to ‘unfair conditions’), it is not appropriate to pursue as a matter of priority only cases concerning the provision of an indispensable input or the access to an essential facility.” This encourages complaints to Brussels enforcers by scores of companies that are denied an opportunity to deal with a dominant firm, due to “unfairness.” This may be expected to substantially undermine business efficiency, as firms stuck with the “dominant” label are required to enter into suboptimal supply relationships. Dynamic efficiency will also suffer, to the extent that intellectual-property holders are required to license on unfavorable terms (a reality that may be expected to diminish dominant firms’ incentives to invest in innovative activities).
  5. The annex threatens to increase the number of Commission “margin-squeeze” cases, whereby vertically integrated firms are required to offer favorable sales terms to, and thereby prop up, wholesalers who want to “compete” with them at retail. (See here for a more detailed discussion of the margin-squeeze concept.) The current standard for margin-squeeze liability already is far narrower in the United States than in Europe, due to the U.S. Supreme Court’s decision in linkLine (2009).

Specifically, the annex announces margin-squeeze-related amendments to the 2008 guidance. The amendments aim to clarify that “it is not appropriate to pursue as a matter of priority margin squeeze cases only where those cases involve a product or service that is objectively necessary to be able to compete effectively on the downstream market.” This extends margin-squeeze downstream competitor-support obligations far beyond regulated industries; how far, only time will tell. (See here for an economic study indicating that even the Commission’s current less-intrusive margin-squeeze policy undermines consumer welfare.) The propping up of less-efficient competitors may, of course, be facilitated by having the dominant firm take the lead in raising retail prices, to ensure that the propped-up companies get “fair margins.” Such a result diminishes competitive vigor and (once again) directly harms consumers.

In sum, through the annex’s revisions to the 2008 guidance, the Commission has, without public comment (and well prior to the release of new first-time guidelines), taken several significant steps that predictably will reduce competitive vitality and harm consumers in those markets where “dominant firms” exist. Relatedly, of course, to the extent that innovative firms respond to incentives to “pull their punches” so as not to become dominant, dynamic competition will be curtailed. As such, consumers will suffer, and economic welfare will diminish.

How Will European Courts Respond?

Fortunately, there is a ray of hope for those concerned about the European Commission’s new interventionist philosophy regarding abuses of dominance. Although the annex and the related competition policy brief cite a host of EU judicial decisions in support of revisions to the guidance, their selective case references and interpretations of judicial holdings may be subject to question. I leave it to EU law experts (I am not one) to more thoroughly parse specific judicial opinions cited in the March 27 release. Nevertheless, it seems to me that the Commission may face some obstacles to dramatically “stepping up” its abuse-of-dominance enforcement actions along the lines suggested by the annex. 

A number of relatively recent judicial decisions underscore the concerns that EU courts have demonstrated regarding the need for evidentiary backing and economic analysis to support the Commission’s findings of anticompetitive foreclosure. Let’s look at a few.

  • In Intel v. Commission (2017), the European Court of Justice (ECJ) held that the Commission had failed to adequately assess whether Intel’s conditional rebates on certain microprocessors were capable of restricting competition on the basis of the “as-efficient competitor” (AEC) test, and referred the case back to the General Court. The ECJ also held that the balancing of the favorable and unfavorable effects of Intel’s rebate practice could only be carried out after an analysis of that practice’s ability to exclude at least as-efficient-competitors.
  • In 2022, on remand, the General Court annulled the Commission’s determination (thereby erasing its 1.06 billion Euro fine) that Intel had abused its dominant position. The Court held that the Commission’s failure to respond to Intel’s argument that the AEC test was flawed, coupled with the Commission’s errors in its analysis of contested Intel practices, meant that the “analysis carried out by the Commission is incomplete and, in any event, does not make it possible to establish to the requisite legal standard that the rebates at issue were capable of having, or were likely to have, anticompetitive effects.”
  • In Unilever Italia (2023), the ECJ responded to an Italian Council of State request for guidance in light of the Italian Competition Authority’s finding that Unilever had abused its dominant position through exclusivity clauses that covered the distribution of packaged ice cream in Italy. The court found that a competition authority is obliged to assess the actual capacity to exclude by taking into account evidence submitted by the dominant undertaking (in this case, the Italian Authority had failed to do so). The ECJ stated that its 2017 clarification of rebate-scheme analysis in Intel also was applicable to exclusivity clauses.
  • Finally, in Qualcomm v. Commission (2022), the General Court set aside a 2018 Commission decision imposing a 1 billion Euro fine on Qualcomm for abuse of a dominant position in LTE chipsets. The Commission contended that Qualcomm’s 2011-2016 incentive payments to Apple for exclusivity reduced Apple’s incentive to shift suppliers and had the capability to foreclose Qualcomm’s competitors from the LTE-chipset market. The court found massive procedural irregularities by the Commission and held that the Commission had not shown that Qualcomm’s payments either had foreclosed or were capable of foreclosing competitors. The Court concluded that the Commission had seriously erred in the evidence it relied upon, and in its failure to take into account all relevant factors, as required under the 2022 Intel decision. 

These decisions are not, of course, directly related to the specific changes announced in the annex. They do, however, raise serious questions about how EU judges will view new aggressive exclusionary-conduct theories based on amendments to the 2008 guidance. In particular, EU courts have signaled that they will:

  1. closely scrutinize Commission fact-finding and economic analysis in evaluating exclusionary-abuse cases;
  2. require enforcers to carefully weigh factual and economic submissions put forth by dominant firms under investigation;
  3. require that enforcers take economic-efficiency arguments seriously; and
  4. continue to view the “as-efficient competitor” concept as important, even though the Commission may seek to minimize the test’s significance.

In other words, in the EU, as in the United States, reviewing courts may “put a crimp” in efforts by national competition agencies to read case law very broadly, so as to “rein in” allegedly abusive dominant-firm conduct. In jurisdictions with strong rule-of-law traditions, enforcers propose but judges dispose. The kicker, however, is that judicial review takes time. In the near term, firms will have to absorb additional business-uncertainty costs.

What About the States?

“Monopolization”—rather than the European “abuse of a dominant position”—is, of course, the key single-firm conduct standard under U.S. federal antitrust law. But the debate over the Commission’s abuse-of-dominance standards nonetheless is significant to domestic American antitrust enforcement.

Under U.S. antitrust federalism, the individual states are empowered to enact antitrust legislation that goes beyond the strictures of federal antitrust law. Currently, several major states—New York, Pennsylvania, and Minnesota—are considering antitrust bills that would add abuse of a dominant position as a new state antitrust cause of action (see here, here, here, and here). What’s more, the most populous U.S. state, California, may also consider similar legislation (see here). Such new laws would harmfully undermine consumer welfare (see my commentary here).

If certain states enacted a new abuse-of-dominance standard, it would be natural for their enforcers to look to EU enforcers (with their decades of relevant experience) for guidance in the area. As such, the annex (and future Commission guidelines, which one would expect to be consistent with the new annex guidance) could prove quite influential in promoting highly interventionist state policies that reach far beyond federal monopolization standards.

What’s worse, federal judicial case law that limits the scope of Sherman Act monopolization cases would have little or no influence in constraining state judges’ application of any new abuse-of-dominance standards. It is questionable whether state judges would feel empowered, or would even be capable of, independently applying often-confusing EU case law regarding abuse of dominance as a possible constraint on state officials’ prosecutions.

Conclusion

The Commission’s emerging guidance on abuse of dominance is bad for consumers and for competition. EU courts may constrain some Commission enforcement excesses, but that will take time, and new short-term business uncertainty costs are likely.

Moreover, negative effects may eventually also be felt in the United States if states enact proposed abuse-of-dominance prohibitions and state enforcers adopt the European Commission’s interventionist philosophy. State courts, applying an entirely new standard not found in federal law, should not be expected to play a significant role in curtailing aggressive state prosecutions for abuse of dominance.  

Promoters of principled, effects-based, economics-centric antitrust enforcement should take heed. They must be prepared to highlight the ramifications of both foreign and state-level initiatives as they continue to advocate for market-based antitrust policies. Sound law & economics training for state enforcers and judges likely will become more important than ever.  

Spring is here, and hope springs eternal in the human breast that competition enforcers will focus on welfare-enhancing initiatives, rather than on welfare-reducing interventionism that fails the consumer welfare standard.

Fortuitously, on March 27, the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) are hosting an international antitrust-enforcement summit, featuring senior state and foreign antitrust officials (see here). According to an FTC press release, “FTC Chair Lina M. Khan and DOJ Assistant Attorney General Jonathan Kanter, as well as senior staff from both agencies, will facilitate discussions on complex challenges in merger and unilateral conduct enforcement in digital and transitional markets.”

I suggest that the FTC and DOJ shelve that topic, which is the focus of endless white papers and regular enforcement-oriented conversations among competition-agency staffers from around the world. What is there for officials to learn? (Perhaps they could discuss the value of curbing “novel” digital-market interventions that undermine economic efficiency and innovation, but I doubt that this important topic would appear on the agenda.)

Rather than tread familiar enforcement ground (albeit armed with novel legal theories that are known to their peers), the FTC and DOJ instead should lead an international dialogue on applying agency resources to strengthen competition advocacy and to combat anticompetitive market distortions. Such initiatives, which involve challenging government-generated impediments to competition, would efficiently and effectively promote the Biden administration’s “whole of government” approach to competition policy.

Competition Advocacy

The World Bank and the Organization for Economic Cooperation and Development (OECD) have jointly described the role and importance of competition advocacy:

[C]ompetition may be lessened significantly by various public policies and institutional arrangements as well [as by private restraints]. Indeed, private restrictive business practices are often facilitated by various government interventions in the marketplace. Thus, the mandate of the competition office extends beyond merely enforcing the competition law. It must also participate more broadly in the formulation of its country’s economic policies, which may adversely affect competitive market structure, business conduct, and economic performance. It must assume the role of competition advocate, acting proactively to bring about government policies that lower barriers to entry, promote deregulation and trade liberalization, and otherwise minimize unnecessary government intervention in the marketplace.

The FTC and DOJ have a proud history of competition-advocacy initiatives. In an article exploring the nature and history of FTC advocacy efforts, FTC scholars James Cooper, Paul Pautler, & Todd Zywicki explained:

Competition advocacy, broadly, is the use of FTC expertise in competition, economics, and consumer protection to persuade governmental actors at all levels of the political system and in all branches of government to design policies that further competition and consumer choice. Competition advocacy often takes the form of letters from the FTC staff or the full Commission to an interested regulator, but also consists of formal comments and amicus curiae briefs.

Cooper, Pautler, & Zywicki also provided guidance—derived from an evaluation of FTC public-interest interventions—on how advocacy initiatives can be designed to maximize their effectiveness.

During the Trump administration, the FTC’s Economic Liberty Task Force shone its advocacy spotlight on excessive state occupational-licensing restrictions that create unwarranted entry barriers and distort competition in many lines of work. (The Obama administration in 2016 issued a report on harms to workers that stem from excessive occupational licensing, but it did not accord substantial resources to advocacy efforts in this area.)

Although its initiatives in this area have been overshadowed in recent decades by the FTC, DOJ over the years also has filed a large number of competition-advocacy comments with federal and state entities.

Anticompetitive Market Distortions (ACMDs)

ACMDs refer to government-imposed restrictions on competition. These distortions may take the form of distortions of international competition (trade distortions), distortions of domestic competition, or distortions of property-rights protection (that with which firms compete). Distortions across any of these pillars could have a negative effect on economic growth. (See here.)

Because they enjoy state-backed power and the force of law, ACMDs cannot readily be dislodged by market forces over time, unlike purely private restrictions. What’s worse, given the role that governments play in facilitating them, ACMDs often fall outside the jurisdictional reach of both international trade laws and domestic competition laws.

The OECD’s Competition Assessment Toolkit sets forth four categories of regulatory restrictions that distort competition. Those are provisions that:

  1. limit the number or range of providers;
  2. limit the ability of suppliers to compete;
  3. reduce the incentive of suppliers to compete; and
  4. limit the choices and information available to consumers.

When those categories explicitly or implicitly favor domestic enterprises over foreign enterprises, they may substantially distort international trade and investment decisions, to the detriment of economic efficiency and consumer welfare in multiple jurisdictions.

Given the non-negligible extraterritorial impact of many ACMDs, directing the attention of foreign competition agencies to the ACMD problem would be a particularly efficient use of time at gatherings of peer competition agencies from around the world. Peer competition agencies could discuss strategies to convince their governments to phase out or limit the scope of ACMDs.

The collective action problem that may prevent any one jurisdiction from acting unilaterally to begin dismantling its ACMDs might be addressed through international trade negotiations (perhaps, initially, plurilateral negotiations) aimed at creating ACMD remedies in trade treaties. (Shanker Singham has written about crafting trade remedies to deal with ACMDs—see here, for example.) Thus, strategies whereby national competition agencies could “pull in” their fellow national trade agencies to combat ACMDs merit exploration. Why not start the ball rolling at next week’s international antitrust-enforcement summit? (Hint, why not pull in a bunch of DOJ and FTC economists, who may feel underappreciated and underutilized at this time, to help out?)

Conclusion

If the Biden administration truly wants to strengthen the U.S. economy by bolstering competitive forces, the best way to do that would be to reallocate a substantial share of antitrust-enforcement resources to competition-advocacy efforts and the dismantling of ACMDs.

In order to have maximum impact, such efforts should be backed by a revised “whole of government” initiative – perhaps embodied in a new executive order. That new order should urge federal agencies (including the “independent” agencies that exercise executive functions) to cooperate with the DOJ and FTC in rooting out and repealing anticompetitive regulations (including ACMDs that undermine competition by distorting trade flows).

The DOJ and FTC should also be encouraged by the executive order to step up their advocacy efforts at the state level. The Office of Management and Budget (OMB) could be pulled in to help identify ACMDs, and the U.S. Trade Representative’s Office (USTR), with DOJ and FTC economic assistance, could start devising an anti-ACMD negotiating strategy.

In addition, the FTC and DOJ should directly urge foreign competition agencies to engage in relatively more competition advocacy. The U.S. agencies should simultaneously push to make competition-advocacy promotion a much higher International Competition Network priority (see here for the ICN Advocacy Working Group’s 2022-2025 Work Plan). The FTC and DOJ could simultaneously encourage their competition-agency peers to work with their fellow trade agencies (USTR’s peer bureaucracies) to devise anti-ACMD negotiating strategies.

These suggestions may not quite be ripe for meetings to be held in a few days. But if the administration truly believes in an all-of-government approach to competition, and is truly committed to multilateralism, these recommendations should be right up its alley. There will be plenty of bilateral and plurilateral trade and competition-agency meetings (not to mention the World Bank, OECD, and other multilateral gatherings) in the next year or so at which these sensible, welfare-enhancing suggestions could be advanced. After all, “hope springs eternal in the human breast.”

In February’s FTC roundup, I noted an op-ed in the Wall Street Journal in which Commissioner Christine Wilson announced her intent to resign from the Federal Trade Commission. Her departure, and her stated reasons therefor, were not encouraging for those of us who would prefer to see the FTC function as a stable, economically grounded, and genuinely bipartisan independent agency. Since then, Wilson has specified her departure date: March 31, two weeks hence.

With Wilson’s departure, and that of Commissioner Noah Phillips in October 2022 (I wrote about that here, and I recommend Alden Abbott’s post on Noah Phillips’ contribution to the 1-800 Contacts case), we’ll have a strictly partisan commission—one lacking any Republican commissioners or, indeed, anyone who might properly be described as a moderate or mainstream antitrust lawyer or economist. We shall see what the appointment process delivers and when; soon, I hope, but I’m not holding my breath.

Next Comes Exodus

As followers of the FTC—faithful, agnostic, skeptical, or occasional—are all aware, the commissioners have not been alone in their exodus. Not a few staffers have left the building. 

In a Bloomberg column just yesterday, Dan Papscun covers the scope of the departures, “at a pace not seen in at least two decades.” Based on data obtained from a Bloomberg Freedom of Information Act request, Papscun notes the departure of “99 senior-level career attorneys” from 2021-2022, including 71 experienced GS-15 level attorneys and 28 from the senior executive service.

To put those numbers in context, this left the FTC—an agency with dual antitrust and consumer-protection authority ranging over most of domestic commerce—with some 750 attorneys at the end of 2022. That’s a decent size for a law firm that lacks global ambitions, but a little lean for the agency. Papscun quotes Debbie Feinstein, former head of the FTC’s Bureau of Competition during the Obama administration: “You lose a lot of institutional knowledge” with the departure of senior staff and career leaders. Indeed you do.

Onward and Somewhere

The commission continues to scrutinize noncompete terms in employment agreements by bringing cases, even as it entertains comments on its proposal to ban nearly all such terms by regulation (see here, here, here, here, here, here, here, here, and here for “a few” ToTM posts on the proposal). As I noted before, the NPRM cites three recent settlements of Section 5 cases against firms’ use of noncompetes as a means of documenting the commission’s experience with such terms. It’s important to define one’s terms clearly. By “cases,” I mean administrative complaints resolved by consent orders, with no stipulation of any antitrust violation, rather than cases litigated to their conclusion in federal court. And by  “recent,” I mean settlements announced the very day before the publication of the NPRM. 

Also noted was the brevity of the complaints, and the memoranda and orders memorializing the settlements. It’s entirely possible that the FTC’s allegations in one, two, or all of the matters were correct, but based on the public documents, it’s hard to tell how the noncompetes violated Section 5. Commissioner Wilson noted as much in her dissents (here and here).

On March 15, the FTC’s record on noncompete cases grew by a third; that is, the agency announced a fourth settlement (again in an administrative process, and again without a decision on the merits or a stipulation of an antitrust violation). Once again, the public documents are . . . compact, providing little by way of guidance as to how (in the commission’s view) the specific terms of the agreements violated Section 5 (of course, if—as suggested in the NPRM—all such terms violate Section 5, then there you go). Again, Commissioner Wilson noticed.

Here’s a wrinkle: the staff do seem to be building on their experience regarding the use of noncompete terms in the glass container industry. Of the four noncompete competition matters now settled (all this year), three—including the most recent—deal with firms in the glass-container industry, which, according to the allegations, is highly concentrated (at least in its labor markets). The NPRM asked for input on its sweeping proposed rule, but it also asked for input on possible regulatory alternatives. A smarter aleck than myself might suggest that they consider regulating the use of noncompetes in the glass-container industry, given the commission’s burgeoning experience in this specific labor market (or markets).

Someone Deserves a Break Today

The commission’s foray into labor matters continues, with a request for information  (RFI) on “the means by which franchisors exert control over franchisees and their workers.” On the one hand, the commission has a longstanding consumer-protection interest in the marketing of franchises, enforcing its Franchise Rule, which was first adopted in 1978 and amended in 2007. The rule chiefly requires certain disclosures—23 of them—in marketing franchise opportunities to potential franchisees. Further inquiry into the operation of the rule, and recent market developments, could be part of the normal course of regulatory business. 

But this is not exactly that. The RFI raises a panoply of questions about both competition and consumer-protection issues, well beyond the scope of the rule, that may pertain to franchise businesses. It asks, among other things, how the provisions of franchise agreements “affects franchisees, consumers, workers, and competition, or . . . any justifications for such provision[s].”  Working its way back to noncompetes: 

The FTC is currently seeking public comment on a proposed rule to ban noncompete clauses for workers in some situations. As part of that proposed rulemaking, the FTC is interested in public comments on the question of whether that proposed rule should also apply to noncompete clauses between franchisors and franchisees.

As Alden Abbott observed, franchise businesses represent a considerable engine of economic growth. That’s not to say that a given franchisor cannot run afoul of either antitrust or consumer-protection law, but it does suggest that there are considerable positive aspects to many franchisor/franchisee relationships, and not just potential harms.

If that’s right, one might wonder whether the commission’s litany of questions about “the means by which franchisors exert control over franchisees and their workers” represents a neutral inquiry into a complex class of business models employed in diverse industries. If you’re still wondering, Elizabeth Wilkins, director of the FTC’s Office of Policy Planning (full disclosure, she was my boss for a minute, and, in my opinion, a good manager) issued a spoiler alert: “This RFI will begin to unravel how the unequal bargaining power inherent in these contracts is impacting franchisees, workers, and consumers.” What could be more neutral than that? 

The RFI also seeks input on the use of intra-franchise no-poach agreements, a relatively narrow but still significant issue for franchise brand development. More about us: a recent amicus brief filed by the International Center for Law & Economics and 20 scholars of antitrust law and economics (including your humble scribe, but also, and not for nothin’, a Nobel laureate) explains some of the pro-competitive potential of such agreements, both generally and with a focus on a specific case, Deslandes v. McDonald’s.

It’s here, if you or the commission are interested.

Franchising plays a key role in promoting American job creation and economic growth. As explained in Forbes (hyperlinks omitted):

Franchise businesses help drive growth in local, state and national economies. They are major contributors to small business growth and job creation in nearly every local economy in the United States. On a local level, growth is spurred by a number of successful franchise impacts, including multiple new locations opening in the area and the professional development opportunities they provide for the workforce.

Franchises Create Jobs

What kind of impact do franchises have on national economic data and job growth? All in all, small businesses like franchises generate more than 60 percent of all jobs added annually in the U.S., according to the Bureau of Labor Statistics.

Although it varies widely by state, you will often find that the highest job creation market leaders are heavily influenced by franchising growth. The national impact of franchising, according to the IFA Economic Impact Study conducted by IHS Markit Economics in January 2018, is huge.

By the numbers:

  • There are 733,000 franchised establishments in the United States
  • Franchising directly creates 7.6 million jobs
  • Franchising indirectly supports 13.3 million jobs
  • Franchising directly accounts for $404.6 billion in GDP
  • Franchising indirectly accounts for $925.9 billion in GDP

Franchises Drive Economic Growth

How do franchises spur economic growth? Successful franchise brands can grow new locations at a faster rate than other types of small businesses. Individual franchise locations create jobs, and franchise networks multiply the jobs they create by replicating in more markets — or often in more locations in a single market if demand allows. The more they succeed, the greater the multiplier.

It’s also a matter of longevity. According to the Small Business Administration (SBA), 50 percent of new businesses fail during the first five years. Franchises can offer greater sustainability than non-franchised businesses. Franchises are much more likely to be operating after five years. This means more jobs being created longer for each location opened.

Successful franchise brands help stack the deck in favor of success by offering substantial administrative and marketing support for individual locations. Success for the brands means success for the overall economy, driving a virtuous cycle of growth.

Franchising as a business institution is oriented toward reducing economic inefficiencies in commercial relationships. Specifically, economic analysis reveals that it is a potential means for dealing with opportunism and cabining transaction costs in vertical-distribution contracts. In a survey article in the Encyclopedia of Law & Economics, Antony Dnes explores capital raising, agency, and transactions-cost-control theories of franchising. He concludes:

Several theories have been constructed to explain franchising, most of which emphasize savings of monitoring costs in an agency framework. Details of the theories show how opportunism on the part of both franchisors and franchisees may be controlled. In separate developments, writers have argued that franchisors recruit franchisees to reduce information-search costs, or that they signal franchise quality by running company stores.

Empirical studies tend to support theories emphasizing opportunism on the part of franchisors and franchisees. Thus, elements of both agency approaches and transactions-cost analysis receive support. The most robust finding is that franchising is encouraged by factors like geographical dispersion of units, which increases monitoring costs. Other key findings are that small units and measures of the importance of the franchisee’s input encourage franchising, whereas increasing the importance of the franchisor’s centralized role encourages the use of company stores. In many key respects, in result although not in principle, transaction-cost analysis and agency analysis are just two different languages describing the same franchising phenomena.

In short, overall, franchising has proven to be an American welfare-enhancement success story.

There is, however, a three-letter regulatory storm cloud on the horizon that could eventually threaten to undermine economically beneficial franchising. In a March 10 press release, the Federal Trade Commission (FTC) “requests [public] comment[s] on franchise agreements and franchisor business practices, including how franchisors may exert control over franchisees and their workers.” The public will have 60 days to submit comments in response to this request for information (RFI).

Language in the FTC’s press release makes it clear that the commission’s priors are to be skeptical of (if not downright hostile toward) the institution of franchising. The director of the FTC’s Bureau of Consumer Protection notes that there is “growing concern around unfair and deceptive practices in the franchise industry.” The director of the FTC Office of Policy Planning states that “[i]t’s clear that, at least in some instances, the promise of franchise agreements as engines of economic mobility and gainful employment is not being fully realized.” She adds that “[t]his RFI will begin to unravel how the unequal bargaining power inherent in these contracts is impacting franchisees, workers, and consumers.” The references to “unequal bargaining power” and “workers” once again highlight this FTC’s unfortunate fascination with issues that fall outside the proper scope of its competition and consumer-protection mandates.

The FTC’s press release lists representative questions on which it hopes to receive comments, including specifically:

  • franchisees’ ability to negotiate the terms of franchise agreements before signing, and the ability of franchisors to unilaterally make changes to the franchise system after franchisees join;
  • franchisors’ enforcement of non-disparagement, goodwill or similar clauses;
  • the prevalence and justification for certain contract terms in franchise agreements;
  • franchisors’ control over the wages and working conditions in franchised entities, other than through the terms of franchise agreements;
  • payments or other consideration franchisors receive from third parties (e.g., suppliers, vendors) related to franchisees’ purchases of goods or services from those third parties;
  • indirect effects on franchisee labor costs related to franchisor business practices; and
  • the pervasiveness and rationale for franchisors marketing their franchises using languages other than English.

This litany by implication casts franchisors in a negative light, and suggests a potential FTC interest in micromanaging the terms of franchise contractual agreements. Presumably, this would be accomplished through a new proposed rule to be issued after the RFI responses are received. Such “expert” micromanagement reflects a troublesome FTC pretense of regulatory knowledge.

The worst, however, is still to come. The press release closes by asking for comments on whether the commission’s highly problematic proposed rule on noncompete agreements should apply to noncompete clauses between franchisors and franchisees.

Barring noncompetes could severely undermine the incentive of franchisors to create new franchising opportunities in the first place, thereby reducing the use of franchising and denying new business opportunities to potential franchisees. Job creation and economic growth prospects would be harmed. As a result, franchise workers, small businesses, and consumers (who enjoy patronizing franchise outlets because of the quality assurance associated with a franchise trademark) would suffer.

The only saving grace is that a final FTC noncompete rule likely would be struck down in court. Before that happened, however, many rationally risk-averse firms would discontinue using welfare-beneficial noncompetes—including in franchising, assuming franchising was covered by the final rule.

As it is, FTC law and state-consumer protection law already provide more than ample protection for franchisees in their relationship with franchisors. The FTC’s Franchise Rule requires franchisors to make key disclosures upfront before people make a major investment. What’s more, the FTC Act prohibits material misrepresentations about any business opportunity, including franchises.

Moreover, as the FTC itself admits, franchisees may be able to use state statutes that prohibit unfair or deceptive practices to challenge conduct that violates the Franchise Rule or truth-in-advertising standards.  

The FTC should stick with its current consumer-protection approach and ignore the siren song of micromanaging (and, indeed, discouraging) franchisor-franchisee relationships. If it is truly concerned about the economic welfare of consumers and producers, it should immediately withdraw the RFI.

The 117th Congress closed out without a floor vote on either of the major pieces of antitrust legislation introduced in both chambers: the American Innovation and Choice Online Act (AICOA) and the Open App Markets Act (OAMA). But it was evident at yesterday’s hearing of the Senate Judiciary Committee’s antitrust subcommittee that at least some advocates—both in academia and among the committee leadership—hope to raise those bills from the dead.

Of the committee’s five carefully chosen witnesses, only New York University School of Law’s Daniel Francis appeared to appreciate the competitive risks posed by AICOA and OAMA—noting, among other things, that the bills’ failure to distinguish between harm to competition and harm to certain competitors was a critical defect.

Yale School of Management’s Fiona Scott Morton acknowledged that ideal antitrust reforms were not on the table, and appeared open to amendments. But she also suggested that current antitrust standards were deficient and, without much explanation or attention to the bills’ particulars, that AICOA and OAMA were both steps in the right direction.

Subcommittee Chair Amy Klobuchar (D-Minn.), who sponsored AICOA in the last Congress, seems keen to reintroduce it without modification. In her introductory remarks, she lamented the power, wealth (if that’s different), and influence of Big Tech in helping to sink her bill last year.

Apparently, firms targeted by anticompetitive legislation would rather they weren’t. Folks outside the Beltway should sit down for this: it seems those firms hire people to help them explain, to Congress and the public, both the fact that they don’t like the bills and why. The people they hire are called “lobbyists.” It appears that, sometimes, that strategy works or is at least an input into a process that sometimes ends, more or less, as they prefer. Dirty pool, indeed. 

There are, of course, other reasons why AICOA and OAMA might have stalled. Had they been enacted, it’s very likely that they would have chilled innovation, harmed consumers, and provided a level of regulatory discretion that would have been very hard, if not impossible, to dial back. If reintroduced and enacted, the bills would be more likely to “rein in” competition and innovation in the American digital sector and, specifically, targeted tech firms’ ability to deliver innovative products and services to tens of millions of (hitherto very satisfied) consumers.

Our colleagues at the International Center for Law & Economics (ICLE) and its affiliated scholars, among others, have explained why. For a selected bit of self-plagiarism, AICOA and OAMA received considerable attention in our symposium on Antitrust’s Uncertain Future; ICLE’s Dirk Auer had a Truth on the Market post on AICOA; and Lazar Radic wrote a piece on OAMA that’s currently up for a Concurrences award.

To revisit just a few critical points:

  1. AICOA and OAMA both suppose that “self-preferencing” is generally harmful. Not so. A firm might invest in developing a successful platform and ecosystem because it expects to recoup some of that investment through, among other means, preferred treatment for some of its own products. Exercising a measure of control over downstream or adjacent products might drive the platform’s development in the first place (see here and here for some potential advantages). To cite just a few examples from the empirical literature, Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand, not just for Instagram, but for the entire category of photography apps; Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally; and Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base, increasing the potential market for independent game developers, even in the face of competition from first-party games.
  2. AICOA and OAMA, in somewhat different ways, favor open systems, interoperability, and/or data portability. All of these have potential advantages but, equally, potential costs or disadvantages. Whether any is procompetitive or anticompetitive depends on particular facts and circumstances. In the abstract, each represents a business model that might well be procompetitive or benign, and that consumers might well favor or disfavor. For example, interoperability has potential benefits and costs, and, as Sam Bowman has observed, those costs sometimes exceed the benefits. For instance, interoperability can be exceedingly costly to implement or maintain, and it can generate vulnerabilities that challenge or undermine data security. Data portability can be handy, but it can also harm the interests of third parties—say, friends willing to be named, or depicted in certain photos on a certain platform, but not just anywhere. And while recent commentary suggests that the absence of “open” systems signals a competition problem, it’s hard to understand why. There are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.
  3. AICOA and OAMA both embody dubious assumptions. For example, underlying AICOA is a supposition that vertical integration is generally (or at least typically) harmful. Critics of established antitrust law can point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. And it is, in fact, possible for vertical mergers or other vertical conduct to harm competition. But that possibility, and the findings of these few studies, are routinely overstated. The weight of the empirical evidence shows that vertical integration tends to be competitively benign. For example, a widely acclaimed meta-analysis by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade led them to conclude:

“[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. . . .  We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.”

  4. Network effects and data advantages are not insurmountable, nor even necessarily harmful. Advantages of scope and scale for data sets vary according to the data at issue and to the context and analytic sophistication of those with access to the data; in any case, they are subject to diminishing returns. Simple measures of market share or other numerical thresholds may signal very little of competitive import. See, e.g., this on the contestable platform paradox; Carl Shapiro on the putative decline of competition and irrelevance of certain metrics; and, more generally, antitrust’s well-grounded and wholesale repudiation of the Structure-Conduct-Performance paradigm.

These points are not new. As we note above, they’ve been made more carefully, and in more detail, before. What’s new is that the failure of AICOA and OAMA to reach floor votes in the last Congress leaves their sponsors, and many of their advocates, unchastened.

Conclusion

At yesterday’s hearing, Sen. Klobuchar noted that nations around the world are adopting regulatory frameworks aimed at “reining in” American digital platforms. True enough, but that’s exactly what AICOA and OAMA promise; they will not foster competition or competitiveness.

Novel industries may pose novel challenges, not least to antitrust. But it does not follow that the EU’s Digital Markets Act (DMA), proposed policies in Australia and the United Kingdom, or AICOA and OAMA represent beneficial, much less optimal, policy reforms. As Francis noted, the central commitments of OAMA and AICOA, like the DMA and other proposals, aim to help certain firms at the expense of other firms and consumers. This is not procompetitive reform; it is rent-seeking by less-successful competitors.

AICOA and OAMA were laid to rest with the 117th Congress. They should be left to rest in peace.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” Plaintiffs’ (respondents before the Court) theory is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.   

Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. Mindgeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. Mindgeek

Fleites v. MindGeek illustrates well that the proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under Racketeer Influenced and Corrupt Organizations Act (RICO) and Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is Mindgeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed: one that draws a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, and which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for Mindgeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of PornHub. For example, purveyors of illegal content on PornHub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, Mindgeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site. 

But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down PornHub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiff’s theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.   

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and, ultimately, the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false negatives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

[This is a guest post from Mario Zúñiga of EY Law in Lima, Perú. An earlier version was published in Spanish on the author’s personal blog. He gives thanks to Hugo Figari and Walter Alvarez for their comments on the initial version and special thanks to Lazar Radic for his advice and editing of the English version.]

There is a line of thinking according to which, without merger-control rules, antitrust law is “incomplete.”[1] Without such a regime, the argument goes, whenever a group of companies faces the risk of being penalized for cartelizing, they could instead merge and thus “raise prices without any legal consequences.”[2]

A few months ago, at a symposium that INDECOPI[3] organized for the first anniversary of the Peruvian Merger Control Act’s enactment,[4] Rubén Maximiano of the OECD’s Competition Division argued for the importance of merger-control regimes, asserting that mergers are “like the ultimate cartel” because a merged firm could raise prices “with impunity.”

I get Maximiano’s point. Antitrust law was born, in part, to counter the rise of trusts, which had been used to evade the restriction that common law already imposed on “restraints of trade” in the United States. Let’s not forget, however, that these “trusts” were essentially a facade used to mask agreements to fix prices, and only to fix prices.[5] They were not real combinations of two or more businesses, as occurs in a merger. Therefore, even if one agrees that it is important to scrutinize mergers, describing them as an alternative means of “cartelizing” is, to say the least, incomplete.

While this might seem to some to be a debate about mere semantics, I think it is relevant to the broader context in which competition agencies are being pushed from various fronts toward a more aggressive application of merger-control rules.[6]

In describing mergers only as a strategy to gain more market power, or market share, or to expand profit margins, we would miss something very important: how these benefits would be obtained. Let’s not forget what the goal of antitrust law actually is. However we articulate this goal (“consumer welfare” or “the competitive process”), it is clear that antitrust law is more concerned with protecting a process than achieving any particular final result. It protects a dynamic in which, in principle, the market is trusted to be the best way to allocate resources.

In that vein, competition policy seeks to remove barriers to this dynamic, not to force a specific result. In this sense, it is not just what companies achieve in the market that matters, but how they achieve it. And there’s an enormous difference between price-fixing and buying a company. That’s why antitrust law gives a different treatment to “naked” agreements to collude while also contemplating an “ancillary agreements” doctrine.

By accepting this (“ultimate cartel”) approach to mergers, we would also be ignoring decades of economics and management literature. We would be ignoring, to start, the fundamental contributions of Ronald Coase in “The Nature of the Firm.” Acquiring other companies (or business lines or assets) allows firms to reduce transaction costs and generate economies of scale in production. According to Coase:

The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of ‘organising’ production through the price mechanism is that of discovering what the relevant prices are. This cost may be reduced but it will not be eliminated by the emergence of specialists who will sell this information. The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.

The simple answer to that could be to enter into long-term contracts, but Coase notes that this is not so easy. He explains:

There are, however, other disadvantages-or costs of using the price mechanism. It may be desired to make a long-term contract for the supply of some article or service. This may be due to the fact that if one contract is made for a longer period, instead of several shorter ones, then certain costs of making each contract will be avoided. Or, owing to the risk attitude of the people concerned, they may prefer to make a long rather than a short-term contract. Now, owing to the difficulty of forecasting, the longer the period of the contract is for the supply of the commodity or service, the less possible, and indeed, the less desirable it is for the person purchasing to specify what the other contracting party is expected to do.

Coase, to be sure, makes this argument mainly with respect to vertical mergers, but I think it may be applicable to horizontal mergers, as well, to the extent that the latter generate “economies of scale.” Moreover, it’s not unusual for many acquisitions that are classified as “horizontal” to also have a “vertical” component (e.g., a consumer-goods company may buy another company in the same line of business because it wants to take advantage of the latter’s distribution network; or a computer manufacturer may buy another computer company because it has an integrated unit that produces microprocessors).

We also should not leave aside the entrepreneurship element, which frequently is ignored in the antitrust literature and in antitrust law and policy. As Israel Kirzner pointed out more than 50 years ago:

An economics that emphasizes equilibrium tends, therefore, to overlook the role of the entrepreneur. His role becomes somehow identified with movements from one equilibrium position to another, with ‘innovations,’ and with dynamic changes, but not with the dynamics of the equilibrating process itself.

Instead of the entrepreneur, the dominant theory of price has dealt with the firm, placing the emphasis heavily on its profit-maximizing aspects. In fact, this emphasis has misled many students of price theory to understand the notion of the entrepreneur as nothing more than the focus of profit-maximizing decision-making within the firm. They have completely overlooked the role of the entrepreneur in exploiting superior awareness of price discrepancies within the economic system.

Working in mergers and acquisitions, either as an external advisor or in-house counsel, has confirmed the aforementioned for me (anecdotal evidence, to be sure, but with the advantage of allowing very in-depth observations). Firms that take control of other firms are seeking to exploit the comparative advantages they may have over whoever is giving up control. Sometimes a company has (or thinks it has) knowledge or assets (greater knowledge of the market, better sales strategies, a broader distribution network, better access to credit, among many other potential advantages) that allow it to make better use of the seller’s existing assets.

An entrepreneur is successful because he or she sees what others do not see. Beatriz Boza summarizes it well in a section of her book “Empresarios” in which she details the purchase of the Santa Isabel supermarket chain by Intercorp (one of Peru’s biggest conglomerates). The group’s main shareholder, Carlos Rodríguez-Pastor, had already decided to enter the retail business, and the opportunity came in 2003 when the Dutch group Ahold put Santa Isabel up for sale. The move was risky for Intercorp, in that Santa Isabel was in debt and operating at a loss. But Rodríguez-Pastor had been studying what was happening in similar markets in other countries and knew that having a stake in the supermarket business would allow him to reach more consumer-credit customers, in addition to offering other vertical-integration opportunities. In retrospect, the deal can only be described as a success. In 2014, the company reached 34.1% market share and took in revenues of more than US$1.25 billion, with an EBITDA margin of 6.2%. Rodríguez-Pastor saw the synergies that others did not see, but he also dared to take the risk. As Boza writes:

‘Nobody ever saw the synergies,’ concludes the businessman, recalling the businessmen and executives who warned him that he would go bankrupt after the acquisition of Ahold’s assets. ‘Today we have a retail circuit that no one else can have.’

Competition authorities need to recognize these sorts of synergies and efficiencies,[7] and take them into account as compensating effects even where the combination might otherwise represent some risk to competition. That is why the vast majority of proposed mergers are approved by competition authorities around the world.

There is some evidence that companies sanctioned in cartel cases later choose to merge,[8] but what this calls for is that competition authorities put more effort into scrutinizing those particular mergers, not that they adopt a much more aggressive approach to reviewing all mergers.

I am not proposing, of course, that we should abolish merger control or even that it should necessarily be “permissive.” Some mergers may indeed represent a genuine risk to competition. But in analyzing them, employing technical analytic techniques and robust evidence, it is important to recognize that entrepreneurs may have countless valid business reasons to carry out a merger—reasons that are often not fully formalized or even understood by the entrepreneurs themselves, since they operate under a high degree of uncertainty and risk.[9] An entrepreneur’s primary motivation is to maximize his or her own benefit, but we cannot just assume that this will be greater after “concentrating” markets.[10]

Competition agencies must recognize this, and not simply presume anticompetitive intentions or impacts. Antitrust law—and, in particular, the concentration-control regimes throughout the world—require that any harm to competition must be proved, and this is so precisely because mergers are not like cartels.


[1] The debate prior to the enactment of Peru’s Merger Control Act became too politicized and polarized. Opponents went so far as to claim that merger control was “unconstitutional” (highly debatable) or that it constituted an interventionist policy (something that I believe cannot be assumed, but is contingent on the type of regulation that is approved and how it is applied). On the other hand, advocates of the regulation claimed that concentrated markets and monopolies would inevitably result if the act were not approved (without any empirical evidence for this claim). My personal position was initially skeptical, considering that the priority—from a competition-policy point of view, at least in a developing economy like Peru—should continue to be deregulation to remove entry barriers and to prosecute cartels. That being said, a well-designed and well-enforced merger-control regime (i.e., one that generally does not block mergers that are not harmful to competition; is agile; and has adequate protection from political interference) does not have to be detrimental to markets and can generate benefits in terms of avoiding anticompetitive mergers.

In Peru, the Commission for the Defense of Free Competition and its Technical Secretariat have been applying the law quite reasonably. To date, of more than 20 applications, the vast majority have been approved without conditions, and one has been approved with conditions. In addition, approval requests have been resolved in an average of 23 days, within the statutory time limit.

[2] See, e.g., this peer-reviewed 2018 OECD report: “The adoption of a merger control regime should be a priority for Peru, since in its absence competitors can circumvent the prohibition against anticompetitive agreements by merging – with effects potentially similar to those of a cartel immune from antitrust scrutiny.”

[3] National Institute for the Defense of Competition and the Protection of Intellectual Property (INDECOPI, after its Spanish acronym), is the Peruvian competition agency. It is an administrative agency with a broad scope of tasks, including antitrust law, unfair competition law, consumer protection, and intellectual property registration, among others. It can adjudicate cases and impose fines. Its decisions can be challenged before courts.

[4] You can watch the whole symposium (which I recommend) here.

[5] See Gregory J. Werden’s “The Foundations of Antitrust.” Werden explains how the term “trust” had lost its original legal meaning and designated all kinds of agreements intended to restrict competition.

[6] Brian Albrecht, “Are All Mergers Inherently Anticompetitive?”

[7] See, e.g., the “Efficiencies” section of the U.S. Justice Department and Federal Trade Commission’s Horizontal Merger Guidelines, which are currently under review.

[8] See Stephen Davies, Peter Ormosi, and Martin Graffenberger, “Mergers After Cartels: How Markets React to Cartel Breakdown.”

[9] It is always useful to revisit, in this regard, Judge Frank Easterbrook’s classic 1984 piece “The Limits of Antitrust.”

[10] Brian Albrecht explains here why we cannot assume that monopoly profits will always be greater than duopoly profits.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard comprise content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)

The technology itself gives text-based answers based on inputs from the questioner. LLMs use engines trained on troves of data from the internet to guess the next word in a sequence. While the information may come from third parties, the creation of the content itself is due to the LLM. As ChatGPT put it in response to my query here:
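However ChatGPT describes itself, the “guess the next word” mechanism can be made concrete with a deliberately simplified sketch. Production LLMs rely on neural networks trained on vast corpora, not the word-frequency counts used below, and the tiny training snippet and function names here are purely hypothetical. The point of the illustration is the legal one made above: the system composes its answer one predicted word at a time, rather than retrieving a third party’s text verbatim.

```python
# A toy illustration of next-word prediction (not how production LLMs work):
# count which word follows which in a tiny "training" text, then generate a
# reply by repeatedly emitting the most frequent next word.
from collections import Counter, defaultdict

corpus = (
    "the court held that the statute applies and "
    "the court held that the claim fails"
).split()

# Tally word -> following-word frequencies from the training text.
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word][next_word] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Greedily append the most likely next word, one word at a time."""
    words = [prompt_word]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # nothing ever followed this word in the training text
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints "the court held that the court held"
```

Even this trivial model produces fluent-sounding output that appears nowhere verbatim in its training data, which is why the synthesized answer is best understood as content created by the model itself.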

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought for false information provided by an LLM app. But it is notable that these LLM apps are highly unlikely to know much about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from falsehoods spread online are those who are in the public eye. This means that, for the purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was publicized to a third party that resulted in damages to him. For the sake of argument, let’s say he can do both. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that it could be said that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, although the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, when the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures, and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results:

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics:

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to accommodate peak demand.

Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will come after another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband, and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: the pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.
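To make the hold-out dynamic described in the ICLE comments concrete, here is a minimal numerical sketch. The figures are purely hypothetical (they are not drawn from the FCC record or from the comments themselves); the point is only that when the last attacher bears the full replacement cost, each prospective attacher’s best move is to wait, even though the upgrade would be worth building.

```python
# Toy illustration of the hold-out problem under a "last-attacher-pays" rule,
# using purely hypothetical numbers (none of these figures come from the FCC
# record or the ICLE comments).

POLE_REPLACEMENT_COST = 9_000.0   # cost of the new, larger pole
ATTACHMENT_VALUE = 4_000.0        # value each prospective attacher gains from attaching
N_PROSPECTIVE_ATTACHERS = 3

# Last-attacher-pays: the attacher whose request triggers replacement bears the
# entire cost; everyone who attaches afterward free-rides on the new capacity.
first_mover_payoff = ATTACHMENT_VALUE - POLE_REPLACEMENT_COST   # -5,000: a loss
later_mover_payoff = ATTACHMENT_VALUE                           # +4,000: a gain

# Going first is a sure loss while waiting is a gain, so every prospective
# attacher prefers to hold out, even though the upgrade would create surplus:
total_surplus_if_built = N_PROSPECTIVE_ATTACHERS * ATTACHMENT_VALUE - POLE_REPLACEMENT_COST  # +3,000

# Simple pro-rata sharing restores the incentive to request an attachment:
shared_payoff = ATTACHMENT_VALUE - POLE_REPLACEMENT_COST / N_PROSPECTIVE_ATTACHERS  # +1,000

print(f"First mover under last-attacher-pays: {first_mover_payoff:,.0f}")
print(f"Later movers under last-attacher-pays: {later_mover_payoff:,.0f}")
print(f"Total surplus if the pole is upgraded: {total_surplus_if_built:,.0f}")
print(f"Per-attacher payoff under pro-rata sharing: {shared_payoff:,.0f}")
```

The sketch is deliberately simple; it ignores the pole owner’s own incentives and the condition of the existing pole, which is why the comments recommend rules that avoid shifting all costs onto any one party.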

In a Feb. 14 column in the Wall Street Journal, Commissioner Christine Wilson announced her intent to resign her position on the Federal Trade Commission (FTC). For those curious to know why, she beat you to the punch in the title and subtitle of her column: “Why I’m Resigning as an FTC Commissioner: Lina Khan’s disregard for the rule of law and due process make it impossible for me to continue serving.”

This is the seventh FTC roundup I’ve posted to Truth on the Market since joining the International Center for Law & Economics (ICLE) last September, having left the FTC at the end of August. Relentlessly astute readers of this column may have observed that I cited (and linked to) Commissioner Wilson’s dissents in five of my six previous efforts—actually, to three of them in my Nov. 4 post alone.

As anyone might guess, I’ve linked to Wilson’s dissents (and concurrences, etc.) for the same reason I’ve linked to other sources: I found them instructive in some significant regard. Priors and particular conclusions of law aside, I generally found Wilson’s statements to be well-grounded in established principles of antitrust law and economics. I cannot say the same about statements from the current majority.

Commission dissents are not merely the bases for blog posts or venues for venting. They can provide a valuable window into agency matters for lawmakers and, especially, for the courts. And I would suggest that they serve an important institutional role at the FTC, whatever one thinks of the merits of any specific matter. There’s really no point to having a five-member commission if all its votes are unanimous and all its opinions uniform. Moreover, establishing the realistic possibility of dissent can lend credence to those commission opinions that are unanimous. And even in these fractious times, there are such opinions.     

Wilson did not spring forth fully formed from the forehead of the U.S. Senate. She began her FTC career as a Georgetown student, serving as a law clerk in the Bureau of Competition; she returned some years later to serve as chief of staff to Chairman Tim Muris; and she returned again when confirmed as a commissioner in April 2018 (later sworn in in September 2018). In between stints at the FTC, she gained antitrust experience in private practice, both in law firms and as in-house counsel. I would suggest that her agency experience, combined with her work in the private sector, provided a firm foundation for the judgments required of a commissioner.

Daniel Kaufman, former acting director of the FTC’s Bureau of Consumer Protection, reflected on Wilson’s departure here. Personally, with apologies for the platitude, I would like to thank Commissioner Wilson for her service.  And, not incidentally, for her consistent support for agency staff.

Her three Democratic colleagues on the commission also thanked her for her service, if only collectively, and tersely: “While we often disagreed with Commissioner Wilson, we respect her devotion to her beliefs and are grateful for her public service. We wish her well in her next endeavor.” That was that. No doubt heartfelt. Wilson’s departure column was a stern rebuke to the Commission, so there’s that. But then, stern rebukes fly in all directions nowadays.

While I’ve never been a commissioner, I recall a far nicer and more collegial sendoff when I departed from my lowly staff position. Come to think of it, I had a nicer sendoff when I left a large D.C. law firm as a third-year associate bound for a teaching position, way back when.

So, what else is new?

In January, I noted that “the big news at the FTC is all about noncompetes”; that is, about the FTC’s proposed rule to ban the use of noncompetes more-or-less across the board. The rule would cover all occupations and all income levels, with a narrow exception for the sale of the business in which the “employee” has at least a 25% ownership stake (why 25%?), and a brief nod to statutory limits on the commission’s regulatory authority with regard to nonprofits, common carriers, and some other entities.

Colleagues Brian Albrecht (and here), Alden Abbott, Gus Hurwitz, and Corbin K. Barthold also have had things to say about it. I suggested that there were legitimate reasons to be concerned about noncompetes in certain contexts—sometimes on antitrust grounds, and sometimes for other reasons. But certain contexts are far from all contexts, and a mixed and developing body of economic literature, coupled with limited FTC experience in the subject, did not militate in favor of nearly so sweeping a regulatory proposal. This is true even before we ask practical questions about staffing for enforcement or, say, whether the FTC Act conferred the requisite jurisdiction on the agency.

This is the first or second FTC competition rulemaking ever, depending on how one counts, and it is the first this century, in any case. Here’s administrative scholar Thomas Merrill on FTC competition rulemaking. Given the Supreme Court’s recent articulation of the major questions doctrine in West Virginia v. EPA, a more modest and bipartisan proposal might have been far more prudent. A bad turn at the court can lose more than the matter at hand. Comments are due March 20, by the way.

Now comes a missive from the House Judiciary Committee, along with multiple subcommittees, about the noncompete NPRM. The letter opens by stating that “The Proposed Rule exceeds its delegated authority and imposes a top-down one-size-fits-all approach that violates basic American principles of federalism and free markets.” And “[t]he Biden FTC’s proposed rule on non-compete clauses shows the radicalness of the so-called ‘hipster’ antitrust movement that values progressive outcomes over long-held legal and economic principles.”

Ouch. Other than that, Mr. Jordan, how did you like the play?

There are several single-spaced pages on the “FTC’s power grab” before the letter gets to a specific, and substantial, formal document request in the service of congressional oversight. That does not stop the rulemaking process, but it does not bode well either.

Part of why this matters is that there’s still solid, empirically grounded, pro-consumer work that’s at risk. In my first Truth on the Market post, I applauded FTC staff comments urging New York State to reject a certificate of public advantage (COPA) application. As I noted there, COPAs are rent-seeking mechanisms chiefly aimed at insulating anticompetitive mergers (and sometimes conduct) from federal antitrust scrutiny. Commission and staff opposition to COPAs was developed across several administrations on well-established competition principles and a significant body of research regarding hospital consolidation, health care prices, and quality of care.

Office of Policy Planning (OPP) Director Elizabeth Wilkins has now announced that the parties in question have abandoned their proposed merger. Wilkins thanks the staff of OPP, the Bureau of Economics, and the Bureau of Competition for their work on the matter, and rightly so. There’s no new-fangled notion of Section 5 or mergers at play. The work has developed over decades and it’s the sort of work that should continue. Notwithstanding numerous (if not legion) departures, good and experienced staff and established methods remain, and ought not to be repudiated, much less put at risk.    

Oh, right, Meta/Within. On Jan. 31, U.S. District Court Judge Edward J. Davila denied the FTC’s request for a preliminary injunction blocking Meta’s proposed acquisition of Within. On Feb. 9, the commission announced “that this matter in its entirety be and it hereby is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.”

So, what happened? Much ink has been spilled on the weakness of the FTC’s case, both within ToTM (you see what I did there?) and without. ToTM posts by Dirk Auer, Alden Abbott, Gus Hurwitz, Gus again, and me enjoyed no monopoly on skepticism. Ashley Gold called the case “a stretch”; Gary Shapiro, in Fortune, called it “laughable.” And as Gus had pointed out, even the New York Times seemed skeptical.

I won’t recapitulate the much-discussed case, but on the somewhat-less-discussed matter of the withdrawal, I’ll consider why the FTC announced that the matter “is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.” While the matter was not litigated to its conclusion in federal court, the substantial and workmanlike opinion denying the preliminary injunction made it clear that the FTC had lost on the facts under both of the theories of harm to potential competition that they’d advanced.

“Having reviewed and considered the objective evidence of Meta’s capabilities and incentives, the Court is not persuaded that this evidence establishes that it was ‘reasonably probable’ Meta would enter the relevant market.”

An appeal to the 9th U.S. Circuit Court of Appeals likely seemed fruitless. Stopping short of a final judgment, the FTC could have tried for a do-over in its internal administrative Part 3 process, and might have fared well before itself, but that would have demanded considerable additional resources in a case that, in the long run, was bound to be a loser. Bloomberg had previously reported that the commission voted to proceed with the case against the merger contra the staff’s recommendation. Here, the commission noted that “Complaint Counsel [the Commission’s own staff] has not registered any objection” to Meta’s motion to withdraw proceedings from adjudication.

There are novel approaches to antitrust. And there are the courts and the law. And, as noted above, many among the staff are well-versed in that law and experienced at investigations. You can’t always get what you want, but if you try sometimes, you get what you deserve.

Economists have long recognized that innovation is key to economic growth and vibrant competition. As an Organisation for Economic Co-operation and Development (OECD) report on innovation and growth explains, “innovative activity is the main driver of economic progress and well-being as well as a potential factor in meeting global challenges in domains such as the environment and health. . . . [I]nnovation performance is a crucial determinant of competitiveness and national progress.”

It follows that an economically rational antitrust policy should be highly attentive to innovation concerns. In a December 2020 OECD paper, David Teece and Nicolas Petit caution that antitrust today is “missing broad spectrum competition that delivers innovation, which in turn is the main driver of long term growth in capitalist economies.” Thus, the authors stress that “[i]t is about time to put substance behind economists’ and lawyers’ long time admonition to inject more dynamism in our analysis of competition. An antitrust renaissance, not a revolution, is long overdue.”

Accordingly, before the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) finalize their new draft merger guidelines, they would be well-advised to take heed of new research that “there is an important connection between merger activity and innovation.” This connection is described in a provocative new NERA Economic Consulting paper by Robert Kulick and Andrew Card titled “Mergers, Industries, and Innovation: Evidence from R&D Expenditures and Patent Applications.” As the executive summary explains (citation deleted):

For decades, there has been a broad consensus among policymakers, antitrust enforcers, and economists that most mergers pose little threat from an antitrust perspective and that mergers are generally procompetitive. However, over the past year, leadership at the FTC and DOJ has questioned whether mergers are, as a general matter, economically beneficial and asserted that mergers pose an active threat to innovation. The Agencies have also set the stage for a substantial increase in the scope of merger enforcement by focusing on new theories of anticompetitive harm such as elimination of potential competition from nascent competitors and the potential for cumulative anticompetitive harm from serial acquisitions. Despite the importance of the question of whether mergers have a positive or negative effect on industry-level innovation, there is very little empirical research on the subject. Thus, in this study, we investigate this question utilizing, what is to our knowledge, a never before used dataset combining industry-level merger data from the FTC/DOJ annual HSR reports with industry-level data from the NSF on R&D expenditure and patent applications. We find a strong positive and statistically significant relationship between merger activity and industry-level innovative activity. Over a three- to four-year cycle, a given merger is associated with an average increase in industry-level R&D expenditure of between $299 million and $436 million in R&D intensive industries. Extrapolating our results to the industry level implies that, on average, mergers are associated with an increase in R&D expenditure of between $9.27 billion and $13.52 billion per year in R&D intensive industries and an increase of between 1,430 and 3,035 utility patent applications per year. Furthermore, using a statistical technique developed by Nobel Laureate Clive Granger, we find that the direction of causality goes, to a substantial extent, directly from merger activity to increased R&D expenditure and patent applications. Based on these findings we draw the following key conclusions:

  • There is no evidence that mergers are generally associated with reduced innovation, nor do the results indicate that supposedly lax antitrust enforcement over the period from 2008 to 2020 diminished innovative activity. Indeed, R&D expenditure and patent applications increased substantially over the period studied, and this increase was directly linked to increases in merger activity.
  • In previous research, we found that “trends in industrial concentration do not provide a reliable basis for making inferences about the competitive effects of a proposed merger” as “trends in concentration may simply reflect temporary fluctuations which have no broader economic significance” or are “often a sign of increasing rather than decreasing market competition.” This study presents further evidence that previous consolidation in an industry or a “trend toward concentration” may reflect procompetitive responses to competitive pressures, and therefore should not play a role in merger review beyond that already embodied in the market-level concentration screens considered by the Agencies.
  • The Agencies should proceed cautiously in pursuing novel theories of anticompetitive harm; our findings are consistent with the prevailing consensus from the previous decades that there is an important connection between merger activity and innovation, and thus, a broad “anti-merger” policy, particularly one pursued in the absence of strong empirical evidence, has the potential to do serious harm by perversely inhibiting innovative activity.
  • Due to the link between mergers and innovative activity in R&D intensive industries where the potential for anticompetitive consequences can be resolved through remedies, relying on remedies rather than blocking transactions outright may encourage innovation while protecting consumers where there are legitimate competitive concerns about a particular transaction.
  • The potential for mergers to create procompetitive benefits should be taken seriously by policymakers, antitrust enforcers, courts, and academics and the Agencies should actively study the potential benefits, in addition to the costs, of mergers.

In short, the Kulick & Card paper lends valuable empirical support to an economics-based approach to merger analysis that fully takes innovation concerns into account. If the FTC and DOJ truly care about strengthening the American economy (consistent with “President Biden’s stated goals of renewing U.S. innovation and global competitiveness”—see, e.g., here and here), they should take heed in crafting new merger guidelines. An emphasis in the guidelines on avoiding interference with merger-related innovation (taking into account research by such scholars as Kulick, Card, Teece, and Petit) would demonstrate that the antitrust agencies are fully behind President Joe Biden’s plans to promote an innovative economy.
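For readers curious about the Granger-causality exercise mentioned in the Kulick & Card summary above, the sketch below shows what such a test looks like in practice. It uses the grangercausalitytests function from statsmodels on made-up annual series; the merger counts, R&D figures, and lag choice are hypothetical illustrations, not the NERA study’s data or specification.

```python
# Minimal sketch of a Granger-causality test of merger activity on R&D spending,
# using made-up data; this is NOT the Kulick & Card dataset or specification.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
years = np.arange(2008, 2021)

# Hypothetical annual merger counts for one R&D-intensive industry.
mergers = rng.poisson(lam=30, size=len(years)).astype(float)

# Hypothetical R&D expenditure (in $ billions), loosely following last year's mergers.
rd_spend = 50 + 0.4 * np.concatenate(([30.0], mergers[:-1])) + rng.normal(0, 1, len(years))

df = pd.DataFrame({"rd_spend": rd_spend, "mergers": mergers}, index=years)

# grangercausalitytests asks whether lagged values of the second column ("mergers")
# help predict the first column ("rd_spend") beyond rd_spend's own lags.
results = grangercausalitytests(df[["rd_spend", "mergers"]], maxlag=2)

# p-value of the F-test at lag 1 (small values suggest mergers "Granger-cause" R&D).
print(results[1][0]["ssr_ftest"][1])
```

As with any Granger exercise, a significant result indicates predictive precedence rather than causation in the economic sense, which is why the paper’s authors hedge their claim as showing that causality runs “to a substantial extent” from mergers to innovation measures.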