
The Federal Trade Commission (FTC) might soon be charging rent to Meta Inc. The commission earlier this week issued (bear with me) an “Order to Show Cause why the Commission should not modify its Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (July 27, 2012), as modified by Order Modifying Prior Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (Apr. 27, 2020).”

It’s an odd one (I’ll get to that) and the third distinct Meta matter for the FTC in 2023.

Recall that the FTC and Meta faced off in federal court earlier this year, as the commission sought a preliminary injunction to block the company’s acquisition of virtual-reality studio Within Unlimited. As I wrote in a prior post, U.S. District Court Judge Edward J. Davila denied the FTC’s request in late January. Davila’s order was about more than just the injunction: it was predicated on the finding that the FTC was not likely to prevail in its antitrust case. That was not entirely surprising outside FTC HQ (perhaps not inside either), as I was but one in a long line of observers who had found the FTC’s case to be weak.

No matter for the not-yet-proposed FTC Bureau of Let’s-Sue-Meta, as there’s another FTC antitrust matter pending: the commission also seeks to unwind Facebook’s 2012 acquisition of Instagram and its 2014 acquisition of WhatsApp, even though the FTC reviewed both mergers at the time and allowed them to proceed. Apparently, antitrust apples are never too old for another bite. The FTC’s initial case seeking to unwind the earlier deals was dismissed, but its amended complaint has survived, and the case remains to be heard.

Back to the modification of the 2020 consent order, which famously set a record for privacy remedies: $5 billion, plus substantial behavioral remedies to run for 20 years (with the monetary penalty exceeding the EU’s highest by an order of magnitude). Then-Chair Joe Simons and then-Commissioners Noah Phillips and Christine Wilson accurately claimed that the settlement was “unprecedented, both in terms of the magnitude of the civil penalty and the scope of the conduct relief.” Two commissioners—Rebecca Slaughter and Rohit Chopra—dissented: they thought the unprecedented remedies inadequate.

I commend Chopra’s dissent, if only as an oddity. He rightly pointed out that the commissioners’ analysis of the penalty was “not empirically well grounded.” At no time did the commission produce an estimate of the magnitude of consumer harm, if any, underlying the record-breaking penalty. It never claimed to have done so.

That’s odd enough. But then Chopra opined that “a rigorous analysis of unjust enrichment alone—which, notably, the Commission can seek without the assistance of the Attorney General—would likely yield a figure well above $5 billion.” That subjective likelihood also seemed to lack an empirical basis; certainly, Chopra provided none.

By all accounts, then, the remedies appeared to be wholly untethered from the magnitude of consumer harm wrought by the alleged violations. To be clear, I’m not disputing that Facebook violated the 2012 order, such that a 2019 complaint was warranted, even if I wonder now, as I wondered then, how a remedy that had nothing to do with the magnitude of harm could be an efficient one. 

Now, Commissioner Alvaro Bedoya has issued a statement correctly acknowledging that “[t]here are limits to the Commission’s order modification authority.” Specifically, the commission must “identify a nexus between the original order, the intervening violations, and the modified order.” Bedoya wrote that he has “concerns about whether such a nexus exists” for one of the proposed modifications. He still voted to go ahead with the proposal, as did Slaughter and Chair Lina Khan, who voiced no concerns at all.

It’s odder still. In its heavily redacted order, the commission appears to ground its proposal in conduct alleged to have occurred before the 2020 order that it now seeks to modify. There are no intervening violations there. For example:

From December 2017 to July 2019, Respondent also made misrepresentations relating to its Messenger Kids (“MK”) product, a free messaging and video calling application “specifically intended for users under the age of 13.”

. . . [Facebook] represented that MK users could communicate in MK with only parent-approved contacts. However, [Facebook] made coding errors that resulted in children participating in group text chats and group video calls with unapproved contacts under certain circumstances.

Perhaps, but what circumstances? According to Meta (and the FTC), Meta discovered, corrected, and reported the coding errors to the FTC in 2019. Of course, Meta is bound to comply with the 2020 Consent Order. But was it bound to do so in 2019? The company has always been subject to the FTC’s “unfair and deceptive acts and practices” (UDAP) authority, but why allege 2019 violations now?

What harm is being remedied? On the one hand, there seems to have been an inaccurate statement about something parents might care about: a representation that users could communicate in Messenger Kids only with parent-approved contacts. On the other hand, there’s no allegation that such communications (with approved contacts of the approved contacts) led to any harm to the kids themselves.

Given all of that, why does the commission seek to impose substantial new requirements on Meta? For example, the commission now seeks restrictions on Meta:

…collecting, using, selling, licensing, transferring, sharing, disclosing, or otherwise benefitting from Covered Information collected from Youth Users for the purposes of developing, training, refining, improving, or otherwise benefitting Algorithms or models; serving targeted advertising, or enriching Respondent’s data on Youth users.

There’s more, but that’s enough to have “concerns about” the existence of a nexus between the since-remedied coding errors and the proposed “modification.” Or to put it another way, I wonder what one has to do with the other.

The only violation alleged to have occurred after the 2020 consent order was finalized has to do with the initial 2021 report of the assessor—an FTC-approved independent monitor of Facebook/Meta’s compliance—covering the period from October 25, 2020 to April 22, 2021. There, the assessor reported that:

 …the key foundational elements necessary for an effective [privacy] program are in place . . . [but] substantial additional work is required, and investments must be made, in order for the program to mature.

We don’t know what this amounts to. The initial assessment reported that the basic elements of the firm’s “comprehensive privacy program” were in place, but that substantial work remained. Did progress lag expectations? What were the failings? Were consumers harmed? Did Facebook/Meta fail to address deficiencies identified in the report? If so, for how long? We’re not told a thing. 

Again, what’s the nexus? And why the requirement that Meta “delete Covered Information collected from a User as a Youth unless [Meta] obtains Affirmative Express Consent from the User within a reasonable time period, not to exceed six (6) months after the User’s eighteenth birthday”? That’s a worry, not because there’s nothing there, but because substantial additional costs are being imposed without any account of their nexus to consumer harm, supposing there is one.

Some might prefer such an opt-in policy—one of two that would be required under the proposed modification—but it’s not part of the 2020 consent agreement and it’s not otherwise part of U.S. law. It does resemble a requirement under the EU’s General Data Protection Regulation. But the GDPR is not U.S. law and there are good reasons for that— see, for example, here, here, here, and here.

For one thing, a required opt-in for all such information, in all the ways that it may live on in the firm’s data and models, can be onerous for users and not just the firm. Will young adults be spared concrete harms because of the requirement? It’s highly likely that they’ll have less access to information (and to less information), but highly unlikely that the reduction will be confined to that to which they (and their parents) would not consent. What will be the net effect?

Requirements that apply “[p]rior to … introducing any new or modified products, services, or features” raise a question about the level of granularity anticipated: limitations on the use of covered information apply to the training, refining, or improving of any algorithm or model, and products, services, or features might be modified in various ways daily, or even in real time. Any such modification would require that the most recent independent assessment report find that all the many requirements of the mandated privacy program have been met. If not, then nothing new—including no modifications—is permitted until the assessor provides written confirmation that all material gaps and weaknesses have been “fully” remediated.

Is this supposed to entail independent oversight of every design decision involving information from youth users? Automated modifications? Or that everything come to a halt if any issues are reported? I gather that nobody—not even Meta—proposes to give the company carte blanche with youth information. But carte blanque?

As we’ve been discussing extensively at today’s International Center for Law & Economics event on congressional oversight of the commission, the FTC has a dual competition and consumer-protection enforcement mission. Efficient enforcement of the antitrust laws requires, among other things, that the costs of violations (including remedies) reflect the magnitude of consumer harm. That’s true for privacy, too. There’s no route to coherent—much less complementary—FTC-enforcement programs if consumer protection imposes costs that are wholly untethered from the harms it is supposed to address. 

The United Kingdom’s Competition and Markets Authority (CMA) late last month moved to block Microsoft’s proposed vertical acquisition of Activision Blizzard, a video-game developer that creates and publishes games such as Call of Duty, World of Warcraft, Diablo, and Overwatch. Microsoft summarized this transaction’s substantial benefits to video game players in its January 2022 press release announcing the proposed merger.

The CMA based its decision on speculative future harm in UK cloud-based gaming, neglecting the dramatic and far more likely dynamic competitive benefits the transaction would produce in gaming markets. The FTC announced its own challenge to the merger in December and has scheduled administrative hearings into the matter later in 2023.

If not overturned on appeal, the CMA’s decision is likely to reduce future consumer welfare and innovation in the gaming sector, to the detriment of producers and consumers.

Discussion

In its press release, the CMA stressed harm to future UK consumers of remote-server-based “cloud gaming” services as the basis for opposing the merger:

Microsoft has a strong position in cloud gaming services and the evidence available to the CMA showed that Microsoft would find it commercially beneficial to make Activision’s games exclusive to its own cloud gaming service.

Microsoft already accounts for an estimated 60-70% of global cloud gaming services and has other important strengths in cloud gaming from owning Xbox, the leading PC operating system (Windows) and a global cloud computing infrastructure (Azure and Xbox Cloud Gaming).

The deal would reinforce Microsoft’s advantage in the market by giving it control over important gaming content such as Call of Duty, Overwatch, and World of Warcraft. The evidence available to the CMA indicates that, absent the merger, Activision would start providing games via cloud platforms in the foreseeable future.

The CMA’s discussion ignores a number of salient facts regarding cloud gaming. Cloud gaming has not yet arrived as a major competitor to device-based gaming, as Dirk Auer points out (see also here regarding problems that have constrained the rapid emergence of cloud gaming). Google, for example, discontinued its Stadia cloud-gaming service just over three months ago, “after having failed to gain the traction that the company was expecting” (see here). Although cloud gaming does not require the purchase of specific gaming devices, it does require substantial bandwidth, stable internet connections, and subscriptions to particular services.

What’s more, Microsoft offered the CMA significant concessions to ensure that leading Activision games would remain available on other platforms for at least 10 years (see here, for example). The CMA itself acknowledged this in announcing its opposition to the merger, but rejected Microsoft’s proposals, stating:

Accepting Microsoft’s remedy would inevitably require some degree of regulatory oversight by the CMA. By contrast, preventing the merger would effectively allow market forces to continue to operate and shape the development of cloud gaming without this regulatory intervention.

Ironically, the real “regulatory intervention” that threatens to hinder market forces is the CMA’s blocking of this transaction, which (as a vertical merger) does not eliminate any direct competition and, to the contrary, promises to reinvigorate direct competition with Sony’s PlayStation. As Aurelien Portuese explains:

Sony is cheering on . . . attempt[s] to block Microsoft’s acquisition of Activision. Why? The proposed merger is a bid to offer a robust platform with high-quality games and provide resources for creators to produce more gaming innovation. That’s great for gamers, but threatening to Japanese industry titans Sony and Nintendo, because it would also create a company capable of competing with them more effectively.

If antitrust officials block the merger, they would be giving Sony and its 70 percent share of the global gaming console market the upper hand while preventing Microsoft and its 30 percent market share from effectively challenging the incumbent. That would be a complete reversal of competition policy.

The Japanese gaming industry dominates the world—and yet, U.S. antitrust officials may very well further cement this already decades-long dominance by blocking the Activision-Microsoft merger. Wielding antitrust to impose a twisted conception of domestic competition at the expense of global competitiveness must end, and the proposed Activision-Microsoft combination exemplifies why.

Furthermore, Portuese debunks the notion that Microsoft would have a future incentive to deny access to Activision’s high-selling Call of Duty franchise, reemphasizing the vigorous nature of gaming competition post-merger:

[T]he very idea that Microsoft would want to foreclose access to “Call of Duty” for PlayStation users is controversial. Microsoft would rationally have little incentive to reduce sales across platforms of a popular game. Moreover, Microsoft’s competitive position is weaker than the FTC seems to think: It faces competition from gaming industry incumbents such as Sony, Nintendo, and Epic Games, and from other large tech companies such as Apple, Amazon, Google, Tencent, and Meta.

In short, there are strong reasons to believe that gaming competition would be enhanced by the Microsoft-Activision merger. What’s more, the merger would likely generate efficiencies of integration, such as the promotion of cross-team collaboration (see here, for example). Notably, in announcing its decision to block the merger, even the CMA acknowledged “the benefit of having Activision’s content available on [Microsoft’s subscription service] Game Pass.” In contrast, theoretical concerns about merger-related potential threats to future cloud-gaming competition are uncertain and not well-grounded.

Conclusion

The CMA should not have blocked the merger. The agency’s opposition to this transaction reflects a blinkered focus on questionable possible future harm in a not-yet-developed market, and a failure to properly weigh likely substantial near-term competitive benefits in a thriving existing market.

This is the sort of decision that tends to discourage future procompetitive, efficiency-generating high-tech acquisitions, to the detriment of producers and consumers.

The threat to future vertical mergers that bring together complementary assets to generate attractive new offerings for consumers in dynamically evolving market sectors is particularly unfortunate. Competition agencies should reflect on this reality and rethink their approaches. (FTC, are you paying attention?)

In the meantime, the UK’s Competition Appeal Tribunal should carefully assess and, hopefully, side with Microsoft in its appeal of this unfortunate administrative ruling.

The 9th U.S. Circuit Court of Appeals ruled late last month on Epic Games’ appeal of the decision rendered in 2021 by the U.S. District Court for the Northern District of California in Epic Games v. Apple, affirming in part and reversing in part the district court’s judgment.

In the original case, Epic had challenged as a violation of antitrust law Apple’s prohibition on third-party app stores and in-app-payment (IAP) systems operating on its proprietary iOS platform. The district court ruled against Epic, finding that the company’s real concern was its own business interests in the face of Apple’s business model—in particular, the commission that Apple charges for use of its IAP system—rather than harm to consumers and to competition more broadly.

We at the International Center for Law & Economics filed an amicus brief in the case last year on behalf of ourselves and 26 distinguished law & economics scholars in which we highlighted two important issues we thought the court got right:

  1. The assessment of competitive harm in two-sided markets (which, in turn, hinges on the correct definition of the relevant market); and
  2. The assessment of less-restrictive alternatives.

While the 9th Circuit reached the right conclusion on the whole, it didn’t always follow the right path. The court’s understanding of anticompetitive harm in two-sided markets and the role of less-restrictive alternatives, in particular, raise more questions than they answer. Whereas the immediate result is a victory for Apple, some of the more contentious aspects of the 9th Circuit’s ruling could complicate future cases for digital platforms.

Relevant and Irrelevant Mistakes in Antitrust Market Definition

The circuit court found that District Judge Yvonne Gonzalez Rogers erred in defining the relevant market, but that the error did not undermine her conclusion. The appellate panel was not unanimous on this issue, which became the subject of a partial dissent by Judge Sidney R. Thomas. After all, didn’t Epic Games v. Apple hinge largely on the correct definition of the relevant market (see, for example, here)? How can such a seemingly crucial mistake not reverberate down to the rule-of-reason analysis and, ultimately, the outcome of the case?

As the 9th Circuit explained, however, not all mistakes in market definition are terminal. The majority wrote:

We agree that the district court erred in certain aspects of its market-definition analysis but conclude that those errors were harmless.

The mistake stemmed from the district court’s imposition of “a categorical rule that an antitrust market can never relate to a product that is licensed or sold.” But this should be read as mere dicta, not as dispositive of the case. Indeed, what the appellate court took issue with here is the blanket statement of principle, not how it reflected on the specific case at hand.

While the notion that antitrust markets never relate to products that are licensed or sold can—and does—lead to the rejection of Epic’s proposed relevant market (if Apple does not license or sell its iOS, by the district court’s own reasoning, iOS can’t be the relevant market), the crucial point is that Epic had failed to show that consumers were unaware that purchasing iOS devices “locks” them in, as precedent requires.

Indeed, where the plaintiffs make a single-brand aftermarket claim, it is up to them to rebut the economic presumption that consumers make a knowing choice to restrict their aftermarket options when they enter into a contract on the competitive market (see Kodak and Newcal). For the record, however, it has been argued that even when consumers are totally uninformed about aftermarket conditions when they purchase equipment, they pay a competitive-package price because competition forces manufacturers to offset later aftermarket price increases with initial equipment-price decreases.
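The package-pricing point is just arithmetic. Here is a minimal sketch (my own stylized illustration with hypothetical numbers, not anything drawn from the case record) of how competition over the total cost of ownership forces upfront and aftermarket prices to offset one another:

```python
# Stylized illustration (hypothetical numbers): if manufacturers compete
# on the total cost of ownership, a firm that plans to charge more in its
# captive aftermarket must cut its upfront equipment price to keep the
# overall package competitive.

COMPETITIVE_PACKAGE_PRICE = 1000  # lifetime cost implied by rivals' offers

def equipment_price(aftermarket_price: int, expected_purchases: int) -> int:
    """Upfront price a manufacturer can sustain, given expected aftermarket revenue."""
    return COMPETITIVE_PACKAGE_PRICE - aftermarket_price * expected_purchases

print(equipment_price(20, 10))  # 800: low aftermarket price, high upfront price
print(equipment_price(30, 10))  # 700: high aftermarket price, low upfront price
# Either way, the consumer's expected package price is 1000.
```

The sketch illustrates why some commentators argue that aftermarket power need not yield supracompetitive package prices overall.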

As the 9th Circuit explains:

Moreover, the district court’s finding on Kodak/Newcal’s consumer-unawareness requirement renders harmless its rejection of Epic’s proposed aftermarkets on the legally erroneous basis that Apple does not license or sell iOS as a standalone product. […] To establish its single-brand aftermarkets, Epic bore the burden of “rebut[ting] the economic presumption that . . . consumers make a knowing choice to restrict their aftermarket options when they decide in the initial (competitive) market to enter a . . . contract.” […] Yet the district court found that there was “no evidence in the record” that could support such a showing. As a result, Epic cannot establish its proposed aftermarkets on the record before our court—even after the district court’s erroneous reasoning is corrected.

Because Epic’s proposed aftermarkets fail, and because Apple did not cross-appeal the district court’s rejection of its proposed market (the market for all game transactions, whether on consoles, smartphones, computers, or elsewhere), the district court’s middle-ground market of mobile-games transactions stands on appeal. And it is in that market that the 9th Circuit assesses whether Apple’s conduct is unlawful under the Sherman Act.

A broader point, and one that is easy to miss at first glance, is that operating systems (OS) could be a valid relevant market before the 9th Circuit. The practical consequences of this are ambiguous. For a platform with a relatively small share of the OS market, like Apple, this might be, on balance, a good thing. But to the extent that defining an OS as a relevant market may be used to undergird a narrow aftermarket (such as Epic attempted to do in this case), it could also be seen as a negative.

Anticompetitive Harm: One-Sided Logic in Two-Sided Markets

The 9th Circuit rightly underscored that a showing of harm in two-sided markets must be marketwide. Regrettably, however, it failed to apply this crucial insight correctly.

As we argued in our amicus brief, Epic didn’t demonstrate that Apple’s app-distribution and IAP practices caused the significant marketwide anticompetitive effects that the U.S. Supreme Court in Amex deemed necessary in cases involving two-sided transaction markets (like Apple’s App Store). Epic instead narrowly focused only on harms to developers.

Two-sided markets connect distinct sets of users whose demands for the platform are interdependent—i.e., consumers’ demand for a platform increases as more products are available and product developers’ demand for a platform increases as additional consumers use the platform. Together, the two sides’ demands increase the overall potential for transactions. As a result of these complex dynamics, conduct that may appear anticompetitive when considering the effects on only one set of customers may be entirely consistent with—and actually promote—healthy competition when examining the effects on both sides.

The 9th Circuit makes essentially the same mistake here. It notes a supracompetitive 30% commission fee for IAPs and, echoing the district court, finds “some evidence” of those costs being passed on to consumers as sufficient to establish anticompetitive harm.

But this is woefully insufficient to show marketwide harm. As we noted in our brief, the full effects on other sides of the market may include reduced prices for devices, a greater range of features, or various other benefits. All such factors need to be considered when assessing whether and to what extent “the market as a whole” is harmed by seemingly restrictive conduct on one side of the market.

Furthermore, just because some developers pay higher IAP fees to Apple doesn’t mean that the total number of mobile-game transactions is lower than the counterfactual. Developers that pay the 30% IAP fee may cross-subsidize those that distribute apps for free, thereby increasing the total number of game transactions.
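To make the output point concrete, consider a toy counterfactual (all numbers here are hypothetical assumptions of my own, chosen purely for illustration):

```python
# Toy two-sided-market comparison (hypothetical numbers): a commission can
# trim the paid side slightly while its revenue cross-subsidizes free
# distribution, leaving marketwide output higher than without the fee.

def total_transactions(paid: int, free: int) -> int:
    """Marketwide output counts transactions on both sides of the platform."""
    return paid + free

# Counterfactual without the 30% commission: more paid transactions, but
# less platform revenue to support free apps and their distribution.
no_fee_world = total_transactions(paid=100, free=500)

# World with the commission: some paid transactions are priced out, but
# cross-subsidized free-app distribution expands.
fee_world = total_transactions(paid=90, free=560)

print(no_fee_world, fee_world)  # 600 650: total output rises despite the fee
```

Nothing in the record compels these particular numbers, of course; the point is only that a fee on one side is not, by itself, evidence that marketwide output falls.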

The 9th Circuit therefore misapplied Amex and perpetuated a mistaken approach to the analysis of anticompetitive harm in two-sided markets. In other words: even if one applies “one-sided logic in two-sided markets” and looks at the two sides of the market separately, Epic failed to demonstrate anticompetitive harm. Of course, the mistake is even more glaring if one applies, as one should, two-sided logic in two-sided markets.

Procompetitive Benefits: Two-Sided Logic in Two-Sided Markets

Tellingly, the two-sided logic that the 9th Circuit should have applied in step one of the rule-of-reason analysis—i.e., identifying anticompetitive harm—is instead applied in step two. The court asserts:

Contrary to Epic’s contention, Apple’s procompetitive justifications do relate to the app-transactions market. Because use of the App Store requires an iOS device, there are two ways of increasing App Store output: (1) increasing the total number of iOS device users, and (2) increasing the average number of downloads and in-app purchases made by iOS device users. Below, the district court found that a large portion of consumers factored security and privacy into their decision to purchase an iOS device—increasing total iOS device users. It also found that Apple’s security- and privacy-related restrictions “provide a safe and trusted user experience on iOS, which encourages both users and developers to transact freely”—increasing the per-user average number of app transactions.

If that same holistic approach had been taken in step one, the 9th Circuit wouldn’t need to assess procompetitive justifications, because it wouldn’t have found anticompetitive harm to begin with.

This ties into a longstanding debate in antitrust; namely, whether it is proper to put the burden on the defendant to show procompetitive benefits of its conduct, or whether those benefits need to be accounted for by the plaintiff at step one in making out its prima facie case. The Amex “market as a whole” approach goes some way toward suggesting that the benefits need to be incorporated into the prima facie case, at least insofar as they occur elsewhere in the (properly defined) relevant market and may serve to undermine the claim of net harm.

Arguably, however, the conflict can be avoided by focusing on output in the relevant market—i.e., on the number of transactions. To the extent that lower prices elsewhere increase the number of gaming transactions by increasing the number of users or cross-subsidizing transactions, it may not matter exactly where the benefit occurs.

In this case, the fact of a 30% fee should have been deemed insufficient to make out a prima facie case if it was accompanied by an increase in the total number of transactions.    

Steps Three and Four of Rule of Reason: Right Outcome, Wrong Reasoning

As we have written previously, there is a longstanding question about the role and limits of less-restrictive alternatives (LRAs) under the rule of reason.

Epic’s appeal relied on theoretical LRAs to Apple’s business model to satisfy step three of the rule of reason. According to Epic, because the district court had identified some anticompetitive effects on one side of the market, and because alternative business models could, in theory, be implemented to achieve the same procompetitive benefits as Apple’s current business model, the court should have ruled in Epic’s favor.

There were and are several problems with this reasoning. For starters, LRAs can clearly only be relevant if competitive harm has been established. As discussed above, Epic failed to demonstrate marketwide harm, as required in cases involving two-sided markets. In my view, the 9th Circuit’s findings don’t fundamentally alter this because there is still no convincing evidence of marketwide anticompetitive harm that would justify moving onto step three (or step two, for that matter) of rule-of-reason analysis.

Second, while it is true that, following the Supreme Court’s recent Alston decision, LRA analysis may well be appropriate in some contexts to identify anticompetitive conduct in the face of procompetitive justifications, contrary to the 9th Circuit’s assertions, there is no holding in either the 9th Circuit or the Supreme Court requiring it in the context of two-sided markets (Amex refers to LRA analysis as constituting step three of the rule of reason, but because that case was resolved at step one, it must be viewed as mere dictum).

And for good reason. In the case of two-sided platforms, an LRA approach would inevitably require courts to second-guess the particular allocation of costs, prices, and product attributes across platform users (see here).

Moreover, LRAs like the ones proposed by Epic, which are based on maximizing competitor effectiveness by “opening” an incumbent’s platform, would convert the rule of reason into a regulatory tool that may not promote competition at all. This general approach is antithetical to the role of antitrust law. That role is to act as a prophylactic against anticompetitive conduct that harms consumers, not to be a makeshift regulatory tool for redrawing business models (here).

Unfortunately, the 9th Circuit failed to grasp this. It accepted Epic’s base argument and didn’t dispute that an LRA analysis should be conducted. It instead found that, on the facts, Epic failed to propose viable LRAs to Apple’s restrictions. Even further, the 9th Circuit posited (albeit reluctantly) that where the plaintiff fails to show an LRA as part of a “third step” in the rule of reason, a fourth step is required to weigh the procompetitive against the anticompetitive effects.

But as the 9th Circuit itself notes, the Supreme Court’s most recent rulings—i.e., Alston and Amex—did not require a fourth step. Why would they? Cost-benefit analysis is already baked into the rule of reason. As the 9th Circuit recognizes:

We are skeptical of the wisdom of superimposing a totality-of-the-circumstances balancing step onto a three-part test that is already intended to assess a restraint’s overall effect. 

Further:

Several amici suggest that balancing is needed to pick out restrictions that have significant anticompetitive effects but only minimal procompetitive benefits. But the three-step framework is already designed to identify such an imbalance: A court is likely to find the purported benefits pretextual at step two, or step-three review will likely reveal the existence of viable LRAs.

It is therefore unclear what benefits a fourth step would offer as, in most cases, this would only serve to “briefly [confirm] the result suggested by a step-three failure: that a business practice without a less restrictive alternative is not, on balance, anticompetitive.”

The 9th Circuit’s logic here appears circular. If the necessity of step four is practically precluded by failure at step three, how can it also be that failure to show LRAs in step three requires a fourth step? If step four is triggered after failure at step three, but step four is essentially an abridged version of step three, then what is the point of step four?

This entanglement leads the 9th Circuit to the inevitable conclusion that the failure to conduct a fourth step is immaterial where courts have hitherto diligently assessed anticompetitive harms and procompetitive benefits (under any procedural label):

Even though it did not expressly reference step four, it stated that it “carefully considered the evidence in the record and . . . determined, based on the rule of reason,” that the distribution and IAP restrictions “have procompetitive effects that offset their anticompetitive effects” (emphasis added). This analysis satisfied the court’s obligation pursuant to County of Tuolumne, and the court’s failure to expressly give this analysis a step-four label was harmless.

Conclusion

The 9th Circuit found in favor of Apple on nine out of 10 counts, but it is not entirely clear that the case is a “resounding victory” for Apple. The finding that Judge Rogers’ mistakes in market definition were ultimately irrelevant is, essentially, a red herring (except for the possibility of OS being a relevant market before the 9th Circuit). The important parts of this ruling—and the ones that should give Apple and other digital platforms some pause—are to be found in the rule-of-reason analysis.

First, the 9th Circuit found evidence of anticompetitive harm in a two-sided market without marketwide harm, all the while recognizing, in theory, that marketwide harm is the relevant question in antitrust analysis of two-sided markets. This kind of one-sided logic is bound to result in an overestimation of competitive harm in two-sided markets.

Second, the 9th Circuit’s flawed understanding of LRAs and the need for a fourth step in rule-of-reason analysis could grant plaintiffs not one, but two last-ditch (and unjustified) attempts to make their case, even after having failed previous steps. Ironically, the 9th Circuit found that a fourth step was needed because the rule of reason is not a “rotary list” and that substance, not form, should be dispositive of whether conduct passes muster. But if the rule of reason is not a “rotary list,” why was the district court’s failure to undertake a fourth step seen as a mistake (even if, by the circuit court’s own admission, it was a harmless one)? Shouldn’t it be enough that the district court weighed the procompetitive and anticompetitive effects correctly?

The 9th Circuit appears to fall into the same kind of formalistic thinking that it claims to eschew; namely, that LRAs are necessary in all markets (including two-sided ones) and that a fourth step is always necessary where step three fails, even if skipping it is often inconsequential. We will have to see how this affects future antitrust cases involving digital platforms.

In a May 3 op-ed in The New York Times, Federal Trade Commission (FTC) Chair Lina Khan declares that “We Must Regulate A.I. Here’s How.” I’m concerned after reading it that I missed both the regulatory issue and the “here’s how” part, although she does tell us that “enforcers and regulators must be vigilant.”

Indeed, enforcers should be vigilant in exercising their established authority, pace not-a-little controversy about the scope of the FTC’s authority. 

Most of the chair’s column reads like a parade of horribles. And there’s nothing wrong with identifying risks, even if not every worry represents a serious risk. As Descartes said—or, at least, sort of implied—feelings are never wrong, qua feelings. If one has a thought, it’s hard to deny that one is having it. 

To be clear, I can think of non-fanciful instantiations of the floats in Khan’s parade. Artificial intelligence (AI) could be used to commit fraud, which is and ought to be unlawful. Enforcers should be on the lookout for new forms of fraud, as well as new instances of it. Antitrust violations, likewise, may occur in the tech sector, just as they’ve been found in the hospital sector, electrical manufacturing, and air travel. 

Tech innovations entail costs as well as benefits, and we ought to be alert to both. But there’s a real point to parsing those harms from benefits—and the actual from the likely from the possible—if one seeks to identify and balance the tradeoffs entailed by conduct that may or may not cause harm on net.  

Doing so can be complicated. AI is not just ChatGPT; it’s not just systems that employ foundational large language models; and it’s not just systems that employ one or another form of machine learning. It’s not all (or chiefly) about fraud. The regulatory problem is not just what to do about AI but what to do about…what?

That is, what forms, applications, or consequences do we mean to address, and how and why? If some AI application costs me my job, is that a violation of the FTC Act? Some other law? Abstracting from my own preferences and inflated sense of self-importance, is it a federal issue? 

If one is to enforce the law or engage in regulation, there’s a real need to be specific about one’s subject matter, as well as what one plans to do about it, lest one throw out babies with bathwater. Which reminds me of parts of a famous (for certain people of a certain age) essay in 1970s computer science: Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity,” which is partly about oversimplification in characterizing AI.

The cynic in me has three basic worries about Khan’s FTC, if not about AI generally:

  1. Vigilance is not so much a method as a state of mind (or part of a slogan, or a motto, sometimes put in Latin). It’s about being watchful.
  2. The commission’s current instantiation won’t stop at vigilance, and it won’t stick to established principles of antitrust and consumer-protection law, or to its established jurisdiction.
  3. Doing so without being clear on what counts as an actionable harm under Section 5 of the FTC Act risks considerable damage to innovation, and to the consumer benefits produced by such innovation. 

Perhaps I’m not being all that cynical, given the commission’s expansive new statement of enforcement principles regarding unfair methods of competition (UMC), not to mention the raft of new FTC regulatory proposals. For example, Khan’s op-ed includes a link to the FTC’s proposed commercial surveillance and data security rulemaking, as Khan notes (without specifics) that “innovative services … came at a steep cost. What were initially conceived of as free services were monetized through extensive surveillance of people and businesses that used them.”

That reads like targeted advertising (as opposed to blanket advertising) engaged in cosplay as the Stasi:

I’ll never talk. 

Oh, yes, you’ll talk. You’ll talk or else we’ll charge you for some of your favorite media.

Ok, so maybe I’ll talk a little. 

Here again, it’s not that one couldn’t object to certain acquisitions or applications of consumer data (on some or another definition of “consumer data”). It’s that the concerns purported to motivate regulation read like a laundry list of myriad potential harms with barely a nod to the possibility—much less the fact—of benefits. Surveillance, we’re told in the FTC’s notice of proposed rulemaking, involves:

…the collection, aggregation, retention, analysis, transfer, or monetization of consumer data and the direct derivatives of that information. These data include both information that consumers actively provide—say, when they affirmatively register for a service or make a purchase—as well as personal identifiers and other information that companies collect, for example, when a consumer casually browses the web or opens an app.

That seems to encompass, roughly, anything one might do with data somehow connected to a consumer. For example, there’s the storage of information I voluntarily provide when registering for an airline’s rewards program, because I want the rewards miles. And there’s the information my physician collects, stores, and analyzes in treating me and maintaining medical records, including—but not limited to—things I tell the doctor because I want informed medical treatment.

Anyone might be concerned that personal medical information could be misused. It turns out that there are laws against various forms of misuse, but those laws are imperfect. But are all such practices really “surveillance”? Don’t many have some utility? Incidentally, don’t many consumers—as studies indicate—prefer arrangements whereby they can obtain “content” without a monetary payment? Should all such practices be regulated by the FTC without a new congressional charge, or allocated under a general prohibition of either UMC or “unfair and deceptive acts or practices” (UDAP)? The commission is, incidentally, considering either or both as grounds.

By statute, the FTC’s “unfairness” authority extends only to conduct that “causes or is likely to cause substantial injury to consumers which is not reasonably avoided by consumers themselves.” And it does not cover conduct where those costs are “outweighed by countervailing benefits to consumers or competition.” So which ones are those?

Chair Khan tells us that we have “an online economy where access to increasingly essential services is conditioned on widespread hoarding and sale of our personal data.”  “Essential” seems important, if unspecific. And “hoarding” seems bad, if undistinguished from legitimate collection and storage. It sounds as if Google’s servers are like a giant ball of aluminum foil distributed across many cluttered, if virtual, apartments. 

Khan breezily assures readers that the:

…FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly evolving A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.

But I wonder whether concerns about AI—both those well-founded and those fanciful—all fit under these rubrics. And there’s really no explanation for how the agency means to parse, say, unlawful mergers (under the Sherman and/or Clayton acts) from lawful ones, whether they are to do with AI or not.

We’re told that a “handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools.” Perhaps, but why link to a newspaper article about Google and Microsoft for “powerful businesses” without establishing any relevant violations of the law? And why link to an article about Google and Nvidia AI systems—which are not raw materials—in suggesting that some firms control “essential” raw materials (as inputs) to innovation, without any further explanation? Was there an antitrust violation? 

Maybe we already regulate AI in various ways. And maybe we should consider some new ones. But I’m stuck at the headline of Khan’s piece: Must we regulate further? If so, how? And not incidentally, why, and at what cost? 

Four prominent horsemen of the Biden administration’s bureaucratic apocalypse—the Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Civil Rights Division, the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC)—came together April 25 to issue a joint statement pledging vigorous enforcement against illegal activity perpetrated through the use of artificial intelligence (AI) and automated systems.

AI is, of course, very much in the news these days. And when AI is used to violate the law, it obviously is fully subject to enforcement scrutiny. But why make a big splash announcement merely to state a truism?

One suspects there is more to the story. The language of the joint statement, together with the FTC’s accompanying press release, provide some hints. Those hints point to a campaign by the administration to effectuate de facto bureaucratic regulation of AI through overly expansive interpretations of existing law. The following discussion will focus on the FTC’s role in this initiative.

Discussion

The FTC’s brief press release embodies a broad view of AI-related wrongdoing. It notes that the four agencies “pledged today to uphold America’s commitment to the core principles of fairness, equality, and justice” as emerging automated systems, including AI, “become increasingly common in our daily lives – impacting civil rights, fair competition, consumer protection, and equal opportunity.” The release adds that the agencies have “resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.”

The FTC’s references to “fairness” and “fair competition” by implication allude to the fatally flawed November 2022 FTC Policy Statement on Unfair Methods of Competition (UMC). That policy statement has been roundly criticized (see the thoughtful critiques in the Truth on the Market symposium on the UMC statement) for rejecting the venerable consumer-welfare standard that had long guided FTC competition-enforcement policy, and for replacing it with subjective notions of “unfair” conduct that could arbitrarily be invoked by the Commission to attack any conduct it found distasteful. (See then-Commissioner Christine Wilson’s dissenting statement.) Such an approach undermines the rule of law, ignores efficiencies, promotes uncertainty, and thereby harmfully interferes with welfare-promoting business conduct.

The specter of arbitrary FTC challenges to AI-related competitive practices that are misunderstood by the commission is obvious. Arbitrary legal attacks on AI practices on dubious subjective grounds could forestall a substantial amount of welfare-generating innovation in the AI space. This would reduce economic wealth creation and harm American technological progress in AI, in addition to weakening U.S. efforts to prevent China from becoming dominant in this key realm (see here for a discussion of the U.S.-China AI rivalry).

The statement’s announcement that the agencies intend “to monitor the development and use of automated systems” is likewise troublesome. In the FTC’s case, it suggests a potential interest in deciding what forms of AI “development and use” are appropriate. Although rulemaking is not mentioned, the threat of litigation being brought by one or more of the agencies against certain disfavored AI implementations is real.

In particular, the threat of FTC UMC investigations and prosecutions could shape the nature of AI research by directing it away from innovations that the commission dislikes. This would be a form of “regulation by enforcement oversight” that could substantially slow progress in AI and thereby reduce economic growth.

The joint statement reinforces this problematic reading of the FTC’s press release. It stresses the FTC’s finding that:

AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.

The FTC, however, lacks a general statutory authority to combat “discrimination,” and its authority to attack forms of commercial surveillance likewise is highly dubious. The FTC’s proposed commercial surveillance and data security rulemaking, for example, flunks cost-benefit analysis and has other flaws that would prevent it from passing legal muster; see more here.

The notion that the FTC may challenge AI innovations it disfavors by bringing new questionable “discrimination” suits, and by concocting legally indefensible rule-based surveillance and data-security obligations, is a source of serious concern. As in the case of the UMC policy statement, the FTC would be taking novel actions beyond the scope of its congressionally granted authorities. Even if the courts eventually rejected such FTC initiatives, the costs reflected in foregone welfare-enhancing improvements in AI capabilities would be considerable.

The joint statement’s discussion of the CFPB, EEOC, and the DOJ Civil Rights Division less obviously supports the proposition that those agencies will be encouraged to act beyond their statutory mandates. It is notable, however, that various commentators have raised concerns about regulatory overreach by these three entities; with regard to the CFPB, for example, see here, here, and here.

Nevertheless, it is concerning that the administration would assign high priority to oversight of AI—an area of enormous technological and economic potential—to agencies that are concerned primarily with civil-rights issues and with consumer protection in the realm of financial services. The potential for regulatory mission creep that would harm American AI development and the dynamic competition it sparks is obvious.

Conclusion

The joint statement on AI and automated systems should be seen as a yellow (if not a red) warning flag that Biden administration efforts to micromanage AI development may be in the works. Particular attention should focus on the FTC, which has the potential to seriously undermine beneficial AI development through ill-conceived litigation and regulatory initiatives.

This is a serious matter. AI is of major consequence in the global political economy, particularly given China’s interest in the field. One can only hope that the FTC and the Biden administration will keep this sober reality in mind before they gin up new misguided forms of regulatory interference in the evolution of AI.

[Image: output of the LG Research AI to the prompt “artificial intelligence regulator”]

It appears that the emergence of ChatGPT and other artificial-intelligence systems has complicated the European Union’s efforts to implement its AI Act, mostly by challenging its underlying assumptions. The proposed regulation seeks to govern a diverse and rapidly growing AI landscape. In reality, however, there is no single thing that can be called “AI.” Instead, the category comprises various software tools that employ different methods to achieve different objectives. The EU’s attempt to cover such a disparate array of subjects under a common regulatory framework is likely to be ill-fitted to achieve its intended goals.

Overview of the AI Act

As proposed by the European Commission, the AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights. The proposal defines AI systems broadly to include any software that uses machine learning, and sorts them into three risk levels: unacceptable, high, and limited risk. Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments. Limited-risk systems face certain requirements specifically related to adequate documentation and transparency.

As my colleague Mikolaj Barczentewicz has pointed out, however, the AI Act remains fundamentally flawed. The act defines AI so broadly that it would reach not only software that uses machine learning without posing significant risks, but even ordinary general-purpose software. Its plain terms could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of objectively harmless software.

Understanding Regulatory Overaggregation

Regulatory overaggregation—that is, the grouping of a huge number of disparate and only nominally related subjects under a single regulatory regime embodied by an abstract concept—is not a new issue. We can see evidence of it in the EU’s previous attempts to use the General Data Protection Regulation (GDPR) to oversee the vast domain of “privacy.”

“Privacy” is a capacious concept that includes, for instance, both the creeped-out feelings that certain individual users may feel in response to being tracked by adtech software, as well as potential violations of individuals’ expectations of privacy in location data when cell providers sell data to bounty hunters. In truth, what we consider “privacy” comprises numerous distinct problem domains better defined and regulated according to the specific harms they pose, rather than under one all-encompassing regulatory umbrella.

Similarly, “AI” regulation faces the challenge of addressing various mostly unrelated concerns, from discriminatory bias in lending or hiring to intellectual-property usage to opaque algorithms employed for fraudulent or harmful purposes. Overaggregated regulation, like the AI Act, results in a framework that is both overinclusive (creating unnecessary burdens on individuals and businesses) and underinclusive (failing to address potential harms in its ostensible area of focus, due to its overly broad scope).

In other words, as noted by Kai Zenner, an aide to Member of the European Parliament Axel Voss, the AI Act is obsessed with risks to the detriment of innovation.

This overaggregation is likely to hinder the AI Act’s ability to effectively address the unique challenges and risks associated with the different types of technology that constitute AI systems. As AI continues to evolve rapidly and to diversify in its applications, a one-size-fits-all approach may prove inadequate to the specific needs and concerns of different sectors and technologies. At the same time, the regulation’s overly broad scope threatens to chill innovation by causing firms to second-guess whether they should use algorithmic tools.

Disaggregating Regulation and Developing a Proper Focus

The AI landscape is complex and constantly changing. Its systems include various applications across industries like health care, finance, entertainment, and security. As such, a regulatory framework to address AI must be flexible and adaptive, capable of accommodating the wide array of AI technologies and use cases.

More importantly, regulatory frameworks in general should focus on addressing harms, rather than the technology itself. If, for example, bias is suspected in hiring practices—whether facilitated through an AI algorithm or a simple Excel spreadsheet—that issue should be dealt with as a matter of labor law. If labor law or any other class of laws fails to account for the negative use of algorithmic tools, they should be updated accordingly.

Similar nuance should be applied in areas like intellectual property, criminal law, housing, fraud, and so forth. We want the law to capture the illicit behavior of bad actors, not to adopt a universal position on a particular software tool that might be used differently in different contexts.

To reiterate: it is the harm that matters, and regulations should be designed to address known or highly likely harms in well-defined areas. Creating an overarching AI regulation that attempts to address every conceivable harm at the root technological level is impractical, and could be a burdensome hindrance on innovation. Before it makes a mistake, the EU should reconsider the AI Act and adopt a more targeted approach to the harms that AI technologies may pose.

An unofficial version of the EU’s anticipated regulatory proposal on standard essential patents (SEPs), along with a related impact assessment, was leaked earlier this month, generating reactions that range from disquiet to disbelief (but mostly disbelief).

Our friend Igor Nikolic wrote about it here on Truth on the Market, and we share his concern that:

As it currently stands, it appears the regulation will significantly increase costs to the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in SEP-licensing negotiations.

It also might harm the EU’s innovativeness on the global stage and set precedents for other countries to regulate, possibly jeopardizing how the entire international technical-standardization system functions.

Dubious Premises

The regulation originates from last year’s call by the European Commission to establish principles and implement measures that will foster a “balanced,” “smooth,” and “predictable” framework for SEP licensing. With this in mind, the reform aims “to promote an efficient and sustainable SEP licensing ecosystem, where the interests of both SEP holders and implementers are considered” [emphasis added]. As explicitly mentioned in the call, the main problems affecting the SEP ecosystem are holdup, holdout, and forum shopping. 

Unfortunately, it is far from clear these premises are correct or that they justify the sort of regulation the Commission now contemplates. 

The draft regulation purports to fix a broken regime by promoting efficient licensing and ensuring a fair balance between the interests of patent holders and implementers, in order to mitigate the risks of both holdup and holdout, as required by well-established case law and, in particular, by the Court of Justice of the European Union’s (CJEU) landmark Huawei v. ZTE case.

There is, however, scant evidence that the current SEP-licensing regime is inefficient or unbalanced. The best evidence is that SEP-reliant industries are no less efficient than other innovative industries. Likewise, SEP holders do not appear to be capturing the lion’s share of profits in the industries where they operate. In short, it’s not clear that there is any problem to solve in the first place.

There is also scant evidence that the Commission has taken account of hugely important geopolitical considerations. Policymakers are worried that Chinese companies (with the support of Chinese courts and authorities) may use litigation strategies to obtain significantly lower “fair, reasonable, and non-discriminatory” (FRAND) rates.

Indeed, the EU filed a case against China at the World Trade Organization (WTO) last year that complained about the strategic use of anti-suit injunctions (ASIs)—that is, orders restraining a party either from pursuing foreign proceedings or enforcing a judgment obtained in foreign proceedings. As explained in a recent paper, this trend could have severe economic repercussions, given that the smooth implementation of mobile-telecommunication standards is crucial to the economic potential of both the so-called “Internet of Things” and U.S. communications infrastructure writ large.

By disproportionately targeting inventors (as we argue below), the draft regulation penalizes precisely those companies that, from the perspective of geopolitics, it should be protecting (or, at least, not undermining). Indeed, as the Commission’s impact assessment warns, the share of SEPs owned by Chinese companies has increased dramatically in recent years. Penalizing European inventors will only exacerbate this trend.

Missing the Mark

Given the importance of achieving a balance between holdup and holdout, as well as avoiding steps that could reinforce China’s position on the geopolitical map, the leaked version of the forthcoming EU regulation is deeply concerning, to say the least. 

Rather than wrestling with these complex issues, the proposal essentially focuses on ensuring that implementers receive licenses at affordable royalty rates. In other words, it would create significant red tape and compliance costs in an attempt to address an issue that is mostly peripheral to the stated aims, and arguably already dealt with by EU courts in Huawei v. ZTE. That decision, notably, forces parties to negotiate royalties in good faith before they can pursue judicial remedies, such as ASIs. 

Critically, the proposal surmises that there is currently little transparency regarding the aggregate royalties that implementers pay for all the SEPs that underpin a standard. The proposal assumes that making this information public would enable implementers to make better determinations when they negotiate royalties. 

To address this, the proposal creates several mandatory procedures that ultimately serve to make information on total royalty burdens public. It also creates a procedure that parties can use to obtain nonbinding FRAND royalty determinations from third-party arbitrators. More precisely, if contributors do not agree on an aggregate royalty sufficiently early, before components and products implementing the standardized technology are put on the market, implementers and/or contributors can ask the EU Intellectual Property Office (EUIPO) to appoint conciliators tasked with recommending an aggregate royalty (with exceedingly limited ability to appeal such decisions).

The proposal has at least two important drawbacks. 

To start, it is unclear what a nonbinding royalty recommendation would achieve. On the one hand, backers might hope the nonbinding recommendations will, de facto, be transposed by national courts when they rule on FRAND disputes. This may well be correct, but it is far from ideal. One of the great strengths of the current system is that courts in different jurisdictions compete to become the forum of choice for royalty disputes. In doing so, they constantly refine the way they rule on such disputes. Replacing this emergent equilibrium with a one-size-fits-all approach would be a great loss. 

Conversely, it’s plausible that national courts will continue to go about their daily business, largely ignoring the EUIPO royalty recommendations. If that were the case, one could legitimately ask what a lengthy and costly system of nonbinding royalty determinations really achieves. Whatever the case, the draft regulation offers little vision as to how its planned royalty determinations will improve actual outcomes.

A second important issue is that, in its current form, the proposal seems myopically focused on prices. This is a problem because licensing negotiations involve a much broader range of terms. Such considerations as available remedies and penalties, license-termination conditions, cross-licensing, and jurisdiction are often just as important as price. 

Not only are these issues conspicuously absent from the draft regulation, but properly accounting for them would largely undermine the regulation's price-comparison mechanism, as this heterogeneity renders such comparisons essentially apples to oranges.

Along similar lines, the draft regulation also includes a system of sampling to determine whether patents are truly essential to the underlying standard. These checks would be conducted by independent evaluators, selected according to criteria and applying a methodology that the Commission would determine, with the aim of ensuring that the sample can produce statistically valid results.

It's unclear how much such a mechanism would improve on the status quo. Moreover, according to the proposal, the results of these essentiality checks also would not be legally binding. Rather than enhancing SEP-licensing negotiations and safeguarding the effectiveness of essentiality checks, this solution would just exacerbate holdout concerns. Indeed, implementers may use the process to delay negotiations or to avoid paying royalties while the process is ongoing.

Geopolitical Considerations

The Commission’s proposal also sends all the wrong signals internationally. In turn, this may undermine the geopolitical interests of both the EU and the United States.

By signaling its willingness to more closely interfere with the royalty rates agreed between inventors and implementers—even for patents registered outside the EU—the EU is effectively inviting other jurisdictions to do the same (or legitimizing ongoing efforts to do so). 

This is far from ideal. For instance, Chinese government officials and courts have increasingly sought to influence and rule on global FRAND disputes, generally in ways that favor China's own firms, which are largely on the implementer side of disputes. The EU's proposal sends a strong signal that it is fair game for government agencies to more directly influence global FRAND royalty rates, and even to seek to override the decisions of foreign courts.

(The EU isn’t alone in this. Along similar lines, the proposed Standard Essential Royalty Act (SERA) would override foreign FRAND determinations when U.S. patents are involved.) 

In short, the EU's draft regulation will embolden foreign jurisdictions to respond in kind and seek further authority over the royalty rates agreed upon by private parties. Ultimately, this will infuse the SEP-licensing space with politicized oversight and vindicate China's moves to depress the value of the West's intellectual property, thus giving China's state-backed firms a leg up. At a time when geopolitical tensions between China and the West are historically high, such a move seems particularly ill-advised.

Conclusion

In sum, rather than strike a balance between patent owners' and implementers' interests, the EU proposal is one-sided. It introduces burdens only on SEP holders and disregards the significant risks of holdout strategies. Such a framework for SEP licensing would be at odds with the framework crafted by the CJEU in Huawei v. ZTE.

Further, it would undermine the value of many SEPs in ways that would be particularly appreciated by Chinese policymakers. The consequences of such an approach would be disruptive for entire SEP-reliant industries, and for the EU's economic interests.

More, and not just about noncompetes, but first, yes (mea culpa/s’lach lanu), more about noncompetes.

Yesterday on Truth on the Market, I provided an overview of comments filed by the International Center for Law & Economics on the Federal Trade Commission’s (FTC) proposed noncompete rule. In addition to ICLE’s Geoffrey Manne, Dirk Auer, Brian Albrecht, Gus Hurwitz, and myself, we were joined in our comments by 25 other leading academics and former agency officials, including former chief economists at the U.S. Justice Department’s (DOJ) Antitrust Division and a former director of the FTC’s Bureau of Economics.

Not to beat a dead horse, but this is important, as it’s the FTC’s second-ever attempt to promulgate a competition rule under a supposed general rulemaking authority, and the first since the unenforced and long-ago rescinded rule on the Men’s and Boys’ Tailored Clothing Industry, initially adopted in 1967. Not incidentally, this would be a foray into regulation of the terms of labor agreements across the entire economy, on questionable authority (and certainly no express charge from Congress).

I'd also like to highlight some other comments of interest. The Global Antitrust Institute submitted a very thorough critique covering both the economic literature and fundamental issues of antitrust law, as did the Mercatus Center. Washington Legal Foundation covered constitutional and jurisdictional questions, as did comments from TechFreedom. Another set of comments from TechFreedom suggested that the FTC might consider regulating some noncompetes under its Magnuson-Moss Act consumer-protection rulemaking authority, at least after development of an appropriate record.

Asheesh Agarwal submitted comments reviewing legal concerns and risks to the FTC’s authority on behalf of a number of FTC alumni, including, among others, two former directors of the FTC’s Bureau of Economics; two former FTC general counsels; a former director of the FTC’s Office of Policy Planning; a former FTC chief technologist; a former acting director of the FTC’s Bureau of Consumer Protection; and me.

American Bar Association comments that critique the use of noncompetes for low-wage workers but stop short of advocating FTC regulation are here. For an academic pro-regulatory perspective, there were comments submitted by professors Mark Lemley and Orly Lobel.

For additional Truth on the Market posts on the rulemaking, I’d point to those by Alden Abbott, Brian Albrecht (and here), Corbin Barthold, Gus Hurwitz, Richard Pierce Jr., and yours truly. Also, a Wall Street Journal op-ed by Eugene Scalia and Svetlana Gans.

That’s a lot, I know, but these really do explore different issues, and there really are quite a few of them. No lie.

Bringing the Axon Down

As a reward for your patience—or your ability to skip ahead—now for the week’s other hot issue: the U.S. Supreme Court’s decision in Axon Enterprise Inc. v. FTC, which represented a 9-0 loss for the commission (and for the U.S. Securities and Exchange Commission). Does anybody remember the days—not so long ago, if not under current leadership—when the commission would win unanimous court decisions? Phoebe Putney, anyone?

A Bloomberg Law overview of Axon quoting my ICLE colleague Gus Hurwitz is here.

The issue in Axon might seem a narrow one at the intersection of administrative and constitutional law, but bear with me. Enforcement of the FTC Act and the SEC Act often follows a familiar pattern: an agency brings a complaint that, if not settled, may be heard by an administrative law judge (ALJ) in a hearing inside the agency itself. In the case of the FTC, the ALJ's decision can be appealed to the commission. Thus, if the commission does not like the ALJ's decision, it can appeal to itself.

As a general matter, once embroiled in such “agency process,” a defendant must “exhaust” the administrative process before challenging the complaint (or appealing an ALJ or commission decision) in federal court. That’s known as the Doctrine of Exhaustion of Administrative Remedies (see, e.g., McKart v. United States). The doctrine helps to conserve judicial resources, as the courts do not have to consider every challenge (including procedural ones) that arises in the course of administrative enforcement.

The disadvantage, for defendants, is that they may face a long and costly process of agency adjudication before they ever get before a federal judge (some FTC Act complaints initially are brought in federal court, but set that aside). That can exert substantial pressure to settle, even when defendants think the government’s case is a weak one.

At issue in Axon was whether a defendant must exhaust agency process on the merits of an agency complaint before bringing a constitutional challenge to the agency's enforcement action. The agencies said yes, natch. The unanimous Supreme Court said no.

To put the question differently, do the federal district courts have jurisdiction to hear and resolve defendants’ constitutional challenges independent of exhaustion? “The answer is yes,” said the Supreme Court of the United States. According to the court—and reasonably—the agencies don’t have any special expertise on such constitutional questions, even if they have expertise in, say, competition or securities policy. On fundamental constitutional questions, defendants can get their day in court without exhausting agency process.

So, what difference does that make? That remains to be seen, but perhaps more than it might seem. On the one hand, the Axon decision did not repudiate the FTC’s substantive expertise in antitrust (or consumer protection) or its authority to enforce the FTC Act. On the other hand, enforcement is costly for enforcers, and not just defendants, and the FTC is famously—as evidenced by its own recent pleas to Congress for more funding—resource-constrained, to an extent that is said to impair its ability to enforce the FTC Act.

As I noted yesterday, earlier this week, the commission testified that:

While we constantly strive to enforce the law to the best of our capabilities, there is no doubt that—despite the much-needed increased appropriations Congress has provided in recent years—we continue to lack sufficient funding.

The Axon decision means, among other things, that the FTC’s average litigation costs are bound to rise, as we’ll doubtless see more constitutional challenges.

But perhaps there's more to it than that. At least two of the nine justices—Thomas, in a concurring opinion, and Gorsuch, concurring in the judgment—signaled an appetite to further rein in the agencies. And doing so would be part and parcel of a broader judicial trend against deference to administrative agencies. For example, in AMG Capital Management, the Supreme Court narrowly interpreted the commission's power to obtain equitable remedies, and specifically monetary remedies, repudiating established commission practice. And in West Virginia v. EPA, the court demonstrated concern with the breadth of the administrative state; specifically, it rejected the proposition that courts should defer to agency interpretations of vague grants of statutory authority where such interpretations are of major economic and political import.

Where this will all end is anybody’s guess. In the near term, Axon will impose extra costs on the FTC. And the commission’s broader bid to extend its reach faces an uphill battle. 

As I noted in January, the Federal Trade Commission’s (FTC) proposal to ban nearly all noncompete agreements raises many questions. To be sure, there are contexts—perhaps many contexts—in which noncompete agreements raise legitimate policy concerns. But there also are contexts in which they can serve a useful procompetitive function. A per se ban across all industries and occupations, as the FTC’s notice of proposed rulemaking (NPRM) contemplates, seems at the least overly broad, and potentially a dubious and costly policy initiative. 

Yesterday was the deadline to submit comments on the noncompete NPRM, and the International Center for Law & Economics and 30 distinguished scholars of law & economics—including leading academics and past FTC officials—did just that. I commend the comments to you, and not just because I drafted a good portion of them.

Still, given that we had about 75 pages of things to say about the proposal, an abridged treatment may be in order. The bottom line:

[W]e cannot recommend that the Commission adopt the proposed Non-compete Clause Rule (‘Proposed Rule’). It is not supported by the Commission’s experience, authority, or resources; neither is it supported by the evidence—empirical and otherwise—that is reviewed in the NPRM.    

In no particular order, I will summarize some of our comments on key issues.

Not All Policy Concerns Are Antitrust Concerns

As the NPRM acknowledges, litigation over noncompetes focuses mostly on state labor and contract-law issues. And the federal and state cases that do consider specific noncompetes under the antitrust laws have nearly all found them to be lawful.

That’s not to say that there cannot be specific noncompetes in specific labor markets that run afoul of the Sherman Act (or the FTC Act). But antitrust is not a Swiss Army Knife, and it shouldn’t be twisted to respond to every possible policy concern.

Will Firms Invest Less in Employees?

While the NPRM amply catalogs potential problems associated with non-competes, [non-competes], like other vertical restrictions in labor agreements, are not necessarily inefficient, anticompetitive, or harmful to either labor or consumer welfare; they can be efficiency-enhancing and pro-competitive . . . [and] can solve a range of potential hold-up problems in labor contracting.

For example, there are circumstances in which both firms and their employees might benefit from additional employee training. But employees may lack the resources needed to acquire the right training on their own. Their employers might be better resourced, but might worry about their returns on investments in employee training.

Labor is alienable; that is, employees can walk out the door, and they can do so before firm-sponsored training has paid adequate dividends. Hence, they might renegotiate their compensation before it has paid for itself; or they might bring their enhanced skills to a competing firm. Knowing this, firms might tend to underinvest in employee training, which would lower their productivity. Noncompetes can mitigate this hold-up problem, and there is empirical evidence that they do just that.

The Available Literature Is Decidedly Mixed

A per se ban under the antitrust laws would seem to require considerable case law and a settled, relatively comprehensive body of literature demonstrating that noncompetes pose significant harms to competition and consumers in nearly all cases. Neither exists.

First, “there appear to be numerous and broad gaps in the literature.” For example, most policy options, industries, and occupations haven’t been studied at all. And there’s only a single paper looking at downstream price effects in goods and services markets—one that doesn’t appear to be at all generalizable.

In addition, the available results don’t all impugn noncompetes; they’re mixed. For example, while some studies suggest certain classes of workers see increased wages, on average, when noncompete “enforceability” is reduced, others report contexts in which enforcement is associated with rising wages, depending on the occupation (there are studies of physicians, CEOs, and financial advisors) or even the timing with which workers are made aware of noncompetes.

It’s complicated. But as a 2019 working paper from the FTC’s own Bureau of Economics observed, the…

more credible empirical studies tend to be narrow in scope, focusing on a limited number of specific occupations . . . or potentially idiosyncratic policy changes with uncertain and hard-to-quantify generalizability.

So, for example, a study of the effects of an idiosyncratic statutory change regarding noncompetes in certain parts of the tech sector, but not others, in Hawaii (which doesn’t have much of a tech sector) might tell us rather little about our policy options more broadly.

Being the Primary Federal Labor Regulator Requires Resources

There are also reasons to question the FTC's drive to be the federal regulator of noncompetes and other vertical restraints in labor agreements. For one thing, the commission has very little experience with noncompetes, although it did (rush to?) settle three complaints involving noncompetes the day before it issued the NPRM.

All three (plus a fourth settled since) involved very specific facts and circumstances. Three of the four were situated in a single industry: the glass-container industry. And, as recently resigned Commissioner Christine Wilson explained in dissent, the opinions and orders settling the matters did little to explain how the conduct at issue violated the antitrust laws. In one complaint, the alleged restrictions on security guards seemed excessive and unreasonable (as a state court found them to be, under state law), but that doesn’t mean that they violated the FTC Act.

Moreover, this would be a sweeping regulation involving, based on the commission's own estimates, some 30 million current labor agreements and several hundred billion dollars in annual wage effects. Just this week, the commission once again testified to Congress that it lacks adequate personnel and other resources to execute the laws it already is plainly charged with enforcing. So, for example:

 [w]hile we constantly strive to enforce the law to the best of our capabilities, there is no doubt that—despite the much-needed increased appropriations Congress has provided in recent years—we continue to lack sufficient funding.

Given these limitations, it's hard to understand the pitch to regulate labor terms across the entire economy without any congressional charge to do so. And that's leaving aside the FTC's recent and problematic proposal to issue sweeping regulations for digital privacy. Not incidentally, noncompete policy is an active area of state reform, and an issue that's currently before Congress.

A Flimsy Basis for Authority

In the end, the FTC’s claimed authority to issue competition regulations under its general “unfair methods of competition” authority (Section 5 of the FTC Act) and a single clause about regulations (for some purpose) in Section 6(g) of the FTC Act is both contentious and dubious.

While it's not baseless, administrative-law scholars doubt the FTC's position, which rests on a dated opinion from the U.S. Court of Appeals for the D.C. Circuit that's plainly out of step with recent Supreme Court decisions showing less deference to agency authority (like the Axon decision just last week, or last year's West Virginia v. EPA), as well as with more general trends in statutory construction.

All in all, the commission's proposed rule would be a bridge too far—or several of them. The agency isn't just risking the economic costs of a spectacularly overbroad rule; it's also gambling with its own much-needed resources. Court challenges to such a rule are inevitable, and they would place both the substance of a noncompete rule and the FTC's own authority at risk.

Regulators around the globe are scrambling for a silver bullet to “tame” tech companies. Whether it’s the United States, the United Kingdom, Australia, South Africa, or Canada, the animating rationale behind such efforts is that firms like Google, Apple, Meta, and Amazon (GAMA) engage in undesirable market conduct that falls beyond the narrow purview of antitrust law (here and here).

To tackle these supposed ills, which range from exclusionary practices and disinformation to encroachments on privacy and democratic institutions, it is asserted that sweeping new ex ante rules must be enacted and the playing field tilted in favor of enforcement agencies, which have hitherto faced what advocates characterize as insurmountable procedural hurdles (here and here).

Amid these international calls for regulatory intervention, the EU’s Digital Markets Act (DMA) has been seen as a lodestar by advocates of more aggressive competition policy. Beyond addressing social anxieties about unchecked tech power, the DMA’s primary appeal is that it claims to strive for two goals with almost universal appeal: fairness and market contestability.

Unfortunately, the DMA is not the paragon of regulation that it is sometimes made out to be. Indeed, the law is structured less to advance any purportedly universal set of principles than to align digital platforms' business models with an idiosyncratic and specifically European industrial policy, rooted in politics and protectionism. As explained below, it is unlikely that other countries would benefit from emulating this strategy.

The DMA’s Protectionist Origins

While the DMA is today often lauded as eminently pro-competition (here and here), prior to its adoption, many leading European politicians were touting the text as a protectionist industrial-policy tool that would hinder U.S. firms to the benefit of European rivals: a far cry from the purely consumer-centric tool it is sometimes made out to be. French Minister of the Economy Bruno Le Maire, for example, acknowledged as much in 2021 when he said:

Digital giants are not just nice companies with whom we need to cooperate, they are rivals, rivals of the states that do not respect our economic rules, which must therefore be regulated… There is no political sovereignty without technological sovereignty. You cannot claim sovereignty if your 5G networks are Chinese, if your satellites are American, if your launchers are Russian and if all the products are imported from outside.

This logic dovetails neatly with the EU’s broader push for “technology sovereignty,” a strategy intended to reduce the continent’s dependence on technologies that originate abroad. The strategy already has been institutionalized at different levels of EU digital and industrial policy (see here and here). In fact, the European Parliament’s 2020 Briefing on “Digital Sovereignty for Europe” explicitly anticipates that an ex ante regulatory regime similar to the DMA would be a central piece of that puzzle. French President Emmanuel Macron summarized it well when he said:

If we want technological sovereignty, we’ll have to adapt our competition law, which has perhaps been too much focused solely on the consumer and not enough on defending European champions.

Moreover, it can be argued that the DMA was never intended to promote European companies that could seriously challenge the dominance of U.S. firms (see here at 13:40-14:20). Rather, the goal was always to redistribute rents across the supply chain away from digital platforms and toward third parties and competitors (what is referred to as “business users,” as opposed to “end users”). After all, with the arguable exception of Spotify and Booking.com, the EU has none of the former, and plenty of the latter. Indeed, as Pablo Ibañez Colomo has written:

The driver of many disputes that may superficially be seen as relating to leveraging can be rationalised, more convincingly, as attempts to re-allocate rents away from vertically-integrated incumbents to rivals.

Alternative Digital Strategies to the DMA

While the DMA strives to use universal language and has a clear ambition to set global standards, under this veneer of objectivity lies a very particular vision of industrial policy and a certain normative understanding of how rents should be allocated across the value chain. That vision is not apt for everyone and, indeed, may not be apt for anyone (see here). Other countries can certainly look to the EU for inspiration and, admittedly, it would be ludicrous to expect them to ignore what goes on in the bloc.

When deciding whether and what sort of legislation to enact, however, other countries should ultimately seek those approaches that are appropriate to their own context. What they ought not do is reflexively copy templates made with certain goals in mind, which they might not share and which may be diametrically opposed to their own interests or values. Below are some suggestions for alternative strategies to the DMA.

Doubling Down on Sound Competition Laws

Mounting evidence suggests that tech companies increasingly consider the costs of regulatory compliance in planning their business strategy. For example, Meta is reportedly considering shutting down political advertising in Europe to avoid the hassle of complying with the EU’s upcoming rules on online campaigning. Just this week, it was revealed that Twitter may be considering pulling out of the EU because it doesn’t have the capacity to comply with the Code of Practice on Disinformation, a “voluntary” agreement that the Digital Services Act (DSA) will nevertheless make binding.

While perhaps the EU—the world’s third largest economy—can afford to impose costly and burdensome regulation on digital companies because it has considerable leverage to ensure (with some, though as we have seen, by no means absolute, certainty) that they will not desert the European market, smaller economies that are unlikely to be seen by GAMA as essential markets are playing a different game.

Not only do they have much smaller carrots to dangle, but they also disproportionately benefit from the enormous infrastructural investments and consumer benefits brought by GAMA (see, for example, here and here). In this context, the wiser strategy for smaller, ostensibly “nonessential” markets might be to court GAMA, rather than to castigate it. Instead of imposing intricate, costly, and untested regulatory obligations on digital platforms, these countries may reasonably wish to emphasize or bolster the transparency, predictability, and procedural safeguards (including credible judicial review) of their competition-law systems. After all, to regulate competition, you must first attract it.

Indeed, while competition is as important in developing markets as developed ones, developing markets are especially dependent upon competition rules that encourage investment in infrastructure to facilitate economic growth and that offer a secure environment for ongoing innovation. Particularly for relatively young, rapidly evolving industries like digital markets, attracting consistent investment and industry know-how ensures that such markets can innovate and transition into maturity (here and here).

Moreover, the case-by-case approach of competition law allows enforcers to tackle harmful behavior while capturing digital platforms’ procompetitive benefits, rather than throwing the baby out with the bathwater by imposing blanket prohibitions. As Giuseppe Colangelo has suggested, the assumption that competition laws are insufficient to tackle anticompetitive conduct in digital markets is a questionable one, given that most of the DMA’s contemplated prohibitions have also been the object of separate antitrust suits in the EU.

Careful Consideration of Costs and Unintended Consequences

DMA-style ex ante regulation is still untested. Its benefits, if any, remain mostly theoretical. A tradeoff between, say, foreign direct investment (FDI) and ex ante regulation might make sense for some emerging markets if it were clear what was being traded, and at what cost. Alas, such regulations are still in an incipient phase.

The U.S. antitrust bills targeting a handful of companies seem unlikely to be adopted soon; the UK’s Digital Markets Unit proposal has still not been put to Parliament; and Japan and South Korea have imposed codes of conduct only in narrow areas. Even the DMA—the most comprehensive legislative attempt to “rein in” digital companies—entered into force only last October, and it will not start imposing its obligations on gatekeepers until February or March 2024, at the earliest.

At the same time, there are a range of risks and possible unintended consequences associated with the DMA, such as the privacy dangers of sideloading and interoperability mandates; worsening product quality as a result of blanket bans on self-preferencing; decreased innovation; obstruction of the rule of law; and double and even triple jeopardy because of the overlaps between the DMA and EU competition rules. 

Despite the uncertainty inherent in deploying experimental regulation in a fast-moving market, the EU has clearly decided that these risks are not sufficient to offset the DMA's benefits (see here for a critical appraisal). But other countries should not take its word for it.

In conducting an independent examination, they may place more value on some of the DMA’s expected negative consequences, or may find their likelihood of occurring to be unacceptably high. This could be due to endogenous or highly context-dependent factors. In some cases, the tradeoff could mean too large a sacrifice of FDI, while in others, the rules could impinge on legitimate policy priorities, like national security. In either case, countries should evaluate the risks and benefits of the ex ante regulation of digital platforms themselves, and go their own way.

Conclusion

There are, of course, other good reasons why the DMA shouldn’t be so readily emulated by everyone, everywhere, all at once.

Giving enforcers wide discretionary powers to reshape digital markets and override product-design decisions might not be a good idea in countries with a poor track record of keeping corruption in check, or where enforcers lack the required know-how to do so effectively. Simple norms, backed by the rule of law, may not be sufficient to counteract these background conditions. But they also may be preferable to the broad mandates and tools envisioned by the kinds of ex ante regulatory proposals currently in vogue.

Smaller countries with limited budgets would probably also benefit more from castigating unequivocally harmful (and widespread) conduct, like cartels (the “cancers of the market economy”), bid rigging, distortive state aid, and mergers that create actual monopolies (see, for example, here and here), rather than applying experimental regulation underpinned by tenuous theories of harm and indeterminate benefits.

In the end, the DMA has been mistakenly taken to be a panacea or a blueprint for how to regulate tech, when it is neither of these two things. It is, instead, a particularistic approach that may or may not achieve its stated goals. In any case, it must be understood as an outgrowth of a certain industrial-policy strategy and a sui generis vision of how digital markets should distribute rents (spoiler alert: in the interest of European companies).

[The following is a guest post from Igor Nikolic, a research fellow at the European University Institute.]

The European Commission is working on a legislative proposal that would regulate the licensing framework for standard-essential patents (SEPs). A regulatory proposal leaked to the press has already been the subject of extensive commentary (see here, here, and here). The proposed regulation apparently will include a complete overhaul of the current SEP-licensing system and will insert a new layer of bureaucracy in this area.

This post seeks to explain how the EU’s current standardization and licensing system works and to provide some preliminary thoughts on the proposed regulation’s potential impacts. As it currently stands, it appears the regulation will significantly increase costs to the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in SEP-licensing negotiations.

It also might harm the EU’s innovativeness on the global stage and set precedents for other countries to regulate, possibly jeopardizing how the entire international technical-standardization system functions. An open public discussion about the regulation’s contents might provide more time to think about the goals the EU wants to achieve on the global technology stage.

How the Current System Works

Modern technological standards are crucial for today’s digital economy. 5G and Wi-Fi standards, for example, enable connectivity between devices in various industries. 5G alone is projected to add up to €1 trillion to the European GDP and create up to 20 million jobs across all sectors of the economy between 2021 and 2025. These technical standards are typically developed collaboratively through standards-development organizations (SDOs) and include patented technology, called standard-essential patents (SEPs).

Companies working on the development of standards before SDOs are required to disclose patents they believe to be essential to a standard, and to commit to license such patents on fair, reasonable and non-discriminatory (FRAND) terms. For various reasons that are inherent to the system, there are far more disclosed patents that are potentially essential than there are patents that end up truly being essential for a standard. For example, one study calculated that there were 39,000 and 45,000 patents declared essential to 3G UMTS and 4G LTE, respectively, while another estimated as many as 95,000 patent declarations for 5G. Commercial studies and litigated cases, however, provide a different picture: only about 10% to 40% of the disclosed patents were held to be truly essential to a standard.

The discrepancy between the tens of thousands of disclosed patents and the much lower number of truly essential patents is said to create an opaque SEP-licensing landscape. The principal reason for this mismatch, however, is that SDO databases of disclosed patents were never intended to provide an accurate picture of truly essential patents for use in licensing negotiations. For standardization, the much greater danger lies in the possibility of some patents remaining undeclared, thereby avoiding a FRAND commitment and jeopardizing successful market implementation. From that perspective, the broadest possible patent declarations are encouraged, in order to guarantee that the standard will remain accessible to implementers on FRAND terms.

SEP licensing occurs both in bilateral negotiations and via patent pools. In bilateral negotiations, parties try to resolve various technical and commercial issues. Technical questions include:

  1. Whether and how many patents in a portfolio are truly essential;
  2. Whether such patents are infringed by standard-implementing products; and
  3. How many of these patents are valid.

Parties also need to agree on the commercial terms of a license, such as the level of royalties, the royalty-calculation methods, the availability of discounts, the amount of royalties for past sales, any cross-licensing provisions, etc.

SEP owners may also join their patents in a pool and license them as a single portfolio. Patent pools are known to significantly reduce transaction costs for all parties and provide a one-stop shop for implementers. Most licensing agreements are concluded amicably but, in cases where parties cannot agree, litigation may become necessary. The Huawei v. ZTE case provided a framework for good-faith negotiation, and the courts of EU member states have become accustomed to evaluating the conduct of both parties.

What the Proposed Regulation Would Change

According to the Commission, SEP licensing is plagued with inefficiencies, apparently stemming from insufficient transparency and predictability regarding SEPs, uncertainty about FRAND terms and conditions, high enforcement costs, and inefficient enforcement.

As a solution, the leaked regulation would entrust the European Union Intellectual Property Office (EUIPO)—currently responsible for EU trademarks—with establishing a register of standards and SEPs, conducting essentiality checks that would assess whether disclosed patents are truly essential for a standard, providing the process to set up an aggregate royalty for a standard, and making individual FRAND-royalty determinations. The intention, it seems, is to replace market-based negotiations and institutions with centralized government oversight and price regulation.

How Many Standards and SEPs Are in the Regulation’s Scope?

From a legal standpoint, the first question raised by the regulation is, to what standards does it apply? The Commission, in its various studies, has often singled out 3G, 4G, and 5G cellular standards. This is probably because they have been in the headlines, due to international litigation and multi-million-euro FRAND determinations.

The regulation, however, would apparently apply to all SDOs that request SEP owners to license on FRAND terms, and to any SEPs in force in any EU member state. This is a very broad definition that could potentially capture thousands of different standards across all sectors of the economy. Moreover, it isn't limited to European SDOs: any international SDO whose standards include at least one SEP in force in an EU member state would also be ensnared by the rule.

To give a sense of the magnitude of the task, the European Telecommunications Standards Institute (ETSI), a large European SDO, boasts that it annually publishes between 2,000 and 2,500 standards, while the Institute of Electrical and Electronics Engineers (IEEE), an SDO based in the United States, claims to have more than 2,000 standards. Earlier studies found that there were at least 251 interoperability standards in a laptop, while an average smartphone is estimated to contain a minimum of 30 interoperability standards. In the laptop, 75% of standards were licensed under FRAND terms.

In short, we may be talking about thousands of standards to be reported and checked by the EUIPO. Not only is this duplicative work (SDOs already have their own databases), but it would entail significant costs to SEP owners.

Aggregate Royalties May Not Add Anything New

The proposed regulation would allow contributors to a standard (a group not limited to SEP owners; contributors can be any entities that submit technical contributions, patented or not, to an SDO) to agree on the aggregate royalty for the standard. The idea behind aggregate royalty rates is to create transparency about the standard's total price, so that implementers may account for royalties in the cost of their products. Furthermore, aggregate royalties may, theoretically, reduce costs and facilitate SEP licensing, as the total royalty burden would be known in advance.

Beyond competition-law concerns (the leaked regulation makes no mention of any safeguards against exchanges of commercially sensitive information), it is not clear what practical effects aggregate royalty-rate announcements would have. Is the announced rate just a wishful theoretical maximum? To be on the safe side, contributors may simply announce their maximum preference, knowing that—in the actual negotiations—prices would be lowered by caps and discounts. This is nothing new. We have already seen individual SEP owners publicly announce their royalty programs for 4G and 5G in advance. And patent pools bring price transparency to video-codec standards.

What's more, agreement among all contributors is not required. Given that contributors have different business models (some may be vertically integrated, while others focus on technology development and licensing), it is difficult to imagine all of them coming to a consensus. The regulation would appear to allow different groups of contributors to jointly notify their views on the aggregate royalty. This may add even more confusion for standard implementers. For example, one group of contributors could announce an aggregate rate of $10 per product, another 5% of the end-product price, and a third a lower rate of $1 per product. In practice, announcements of aggregate royalty rates may be meaningless.
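To illustrate why such mixed-unit announcements are hard for implementers to act on, here is a minimal sketch (all groups and figures hypothetical, and the assumed device price purely illustrative) that converts the competing announcements above into per-device figures:

```python
# Hypothetical conversion of mixed-unit aggregate-royalty announcements
# into per-device dollar figures. The device price is an arbitrary assumption.
ASSUMED_DEVICE_PRICE = 400.00  # dollars

announcements = {
    "group A ($10 per product)": lambda price: 10.00,
    "group B (5% of end-product price)": lambda price: 0.05 * price,
    "group C ($1 per product)": lambda price: 1.00,
}

for label, burden in announcements.items():
    print(f"{label}: implied per-device royalty = ${burden(ASSUMED_DEVICE_PRICE):.2f}")
# group A: $10.00, group B: $20.00, group C: $1.00 -- a 20x spread,
# which is why competing joint notifications may confuse more than they inform.
```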

Patent Essentiality Is Not the Same as Patent Infringement, Validity, or Value

The regulation also proposes to assess the essentiality of patents declared essential for a standard. It is hoped that this would improve transparency in the SEP landscape and help implementers assess from whom they need to take licenses. For an implementer, however, it is important to know not only whether patents are essential for a standard, but also whether its products infringe those SEPs and whether the SEPs are valid.

A patent may be essential to a standard but not infringed by a concrete product. For example, a patent owner may have a 4G SEP that reads on base stations, but an implementer may manufacture and sell smartphones and thus does not infringe the relevant 4G SEP. Or a patent owner may hold SEPs that claim optional features of a standard, while an implementer may only use the standard’s mandatory features in its products. A study of U.S. SEP litigation found that SEPs were held to be infringed in only 30.7% of cases. In other words, in 69.3% of cases, an SEP was not considered to be infringed by accused products.

A patent may also be essential but invalid. Courts have the final say on whether granted patents fulfill patentability requirements. In the Unwired Planet v Huawei litigation in the UK, the court found two asserted patents valid, essential, and infringed, and two patents invalid.

Essentiality is, therefore, just one piece of the puzzle. Even if parties would accept the nonbinding essentiality determination (which is not guaranteed), they can still disagree over matters of infringement and validity. Essentiality checks are not a silver bullet that would eliminate all disputes.

Essentiality also should not be equated with the patent’s value. Not all patents are created equal. Some SEPs are related to breakthrough or core inventions, while others may be peripheral or optional. Economists have long found that the economic value of patents is highly skewed. Only a relatively small number of patents provide most of the value.

How Accurate and Reliable Is Sampling for Essentiality Assessments?

The leaked regulation provides that, every year, the EUIPO shall select a sample of claimed SEPs from each SEP owner, as well as from each specific standard, for essentiality checks. The Commission would adopt the precise methodology to ensure a fair and statistically valid selection that can produce sufficiently accurate results. Each SEP owner may also propose up to 100 claimed SEPs to be checked for essentiality for each specific standard.

The apparent goal of the samples is to reduce the costs of essentiality assessments. Analyzing essentiality is not a simple task. It takes time and money to produce accurate and reliable results. A thorough review of essentiality by patent pools was estimated to cost up to €10,000 and to last two to three days. Another study spent 40-50 working hours preparing claim charts that are used in essentiality assessments. If we consider that the EUIPO would potentially be directed to assess the essentiality of thousands of standards, it is easy to see how these costs could skyrocket and render the task impossible.

The use of samples is not without concerns, as sampling inevitably introduces a margin of error. Keith Mallinson has suggested that samples must be very large, including thousands of patents, if meaningful results are to be reached. It is therefore questionable why SEP owners would be limited to proposing only 100 patents for checking. Unless a widely accepted method to assess large portfolios of declared patents were found, the results of these essentiality assessments would likely be imprecise and unreliable, and would therefore fall far short of the goal of increased transparency.
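For a rough sense of the statistical stakes, consider a minimal sketch using the standard normal-approximation margin of error for an estimated proportion. The 30% essentiality rate is a hypothetical chosen from within the 10-40% range reported above, and the sketch ignores refinements such as finite-population corrections or stratification by standard:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error (95% confidence) for an
    estimated proportion p, based on a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.30  # hypothetical true share of declared patents that are essential
for n in (100, 400, 2000):
    print(f"sample of {n:4d} patents: estimated {p:.0%} +/- {margin_of_error(p, n):.1%}")

# sample of  100 patents: estimated 30% +/- 9.0%
# sample of  400 patents: estimated 30% +/- 4.5%
# sample of 2000 patents: estimated 30% +/- 2.0%
```

On these illustrative assumptions, a 100-patent sample leaves a confidence interval roughly 18 percentage points wide, which is consonant with Mallinson's suggestion that meaningful results require samples running into the thousands.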

The Dangers of a Top-Down Approach and Patent Counting for Royalty Determinations

Concealed in the regulation is the possibility that the EUIPO could use a top-down approach for royalty determinations, under which each SEP owner would receive a proportional share of a standard's total aggregate royalty. The approach requires (a toy numerical sketch follows the list):

  1. Establishing a cumulative royalty for a standard; and then
  2. Calculating the share in the total royalty to an individual SEP owner.
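To make the arithmetic concrete, here is a minimal sketch of pure patent counting under hypothetical numbers; the aggregate rate, patent counts, and helper name are all illustrative assumptions, not figures drawn from the leaked text:

```python
def top_down_rate(aggregate_rate_pct: float,
                  owner_essential: int,
                  total_essential: int) -> float:
    """Pure patent counting: an owner's royalty is the aggregate rate
    scaled by its share of the essential-patent count. Note what is NOT
    an input: patent value, validity, or infringement."""
    return aggregate_rate_pct * owner_essential / total_essential

# Hypothetical: a 5% aggregate rate for the standard, and an owner
# holding 2,000 of the 20,000 patents deemed truly essential.
print(f"owner's rate: {top_down_rate(5.0, 2_000, 20_000):.2f}% of device price")
# -> owner's rate: 0.50% of device price
```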

Now we can see why the aggregate rate becomes important. The regulation would allow the EUIPO to set up a panel of three conciliators to provide a nonbinding expert opinion on the aggregate royalty rate (in addition to, or regardless of, the rates already announced by contributors). Essentiality checks are also needed to identify which patents are truly essential, and that count can be used to assess each SEP owner's individual share.

A detailed analysis of this top-down approach exceeds the scope of this post, but here are the key points:

  • The approach relies on patent counting, treating every patent as having the same value. We have seen that this is not the case, and that value is, instead, highly skewed. Moreover, essential patents may be invalid or not infringed by specific devices, which is not factored into the top-down calculations.
  • The top-down approach is not used in commercial-licensing negotiations, and courts have frequently rejected its application. Industry practice is to use comparable licensing agreements. The top-down approach was used in Unwired Planet v Huawei only as a cross-check for the rates derived from comparable agreements. TCL v Ericsson relied on the method, but that decision was vacated on appeal. The most recent Interdigital v Lenovo judgment considered and rejected its use, finding “no value in Interdigital’s Top-Down cross-check in any of its guises.”
  • Fundamentally, the EUIPO’s top-down approach would be tantamount to direct government regulation of technology prices. So far, no studies suggest that royalty levels are amiss in ways that might require government intervention. In fact, studies point in the opposite direction: prices are falling over time.

Conclusion

As discussed, the regulation provides for an elaborate system of notification of standards and declared SEPs, essentiality checks, and aggregate and individual royalty-rate determinations. Even with all of these data points, however, it is not clear that licensing would become any easier. Parties may not accept the determinations and may still end up in court.

Recent experience from the automotive sector demonstrates that knowing the essentiality and the price of SEPs did not translate into smoother licensing. Avanci is a platform that gathers almost all SEP owners for licensing 2G, 3G, and 4G SEPs to car manufacturers. It was intended to provide a one-stop shop for licensees by offering a single price for a large portfolio of SEPs. All patents included in the Avanci platform were independently tested for essentiality. Avanci nevertheless faced reluctance from implementers to take a license. Only after litigating and prevailing did Avanci succeed in licensing the majority of the market.

Paradoxically, the most innovative companies—the ones that invest in the research and development of several different standardized solutions and rely on technology licensing as their business model—will bear the brunt of the regulation. It pays off, ironically, to be a user of standardized technology rather than an innovator.

The introduction of such elaborate government regulation of SEP licensing also has important international ramifications. It is easy to imagine that other countries might not be so thrilled with European regulators setting the aggregate rate for international standards and individual rates for their companies’ portfolios. China, in particular, might see it as an example and set up its own centralized agencies for royalty determinations. What may happen if European, Chinese, or some other regulators come up with different aggregate and individual royalty rates? The whole international standardization system could crumble.

In short, the regulation imposes significant costs on SEP owners that innovate and contribute their technologies to international standardization. Faced with excessive costs and overregulation, companies may abandon open and collaborative international standardization, based on FRAND licensing, and instead work on proprietary solutions in smaller industry groups. This would allow them to escape the ambit of EU regulation. Whether this is a better alternative is up for debate.

The European Commission on March 27 showered the public with a series of documents heralding a new, more interventionist approach to enforce Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits “abuses of dominance.” This new approach threatens more aggressive, less economically sound enforcement of single-firm conduct in Europe.

EU courts may eventually constrain the Commission's overreach in this area somewhat, but harmful business uncertainty will be the near-term reality. What's more, the Commission's new approach may unfortunately influence U.S. states that are considering European-style abuse-of-dominance amendments to their own substantive antitrust laws. As such, market-oriented U.S. antitrust commentators will need to be even more vigilant in keeping tabs on—and, where necessary, promptly critiquing—economically problematic shifts in European antitrust-enforcement policy.

The Commission’s Emerging Reassessment of Abuses of Dominance

In a press release summarizing its new initiative, the Commission made a “call for evidence” to obtain feedback on the adoption of first-time guidelines on exclusionary abuses of dominance under Article 102 TFEU.

In parallel, the Commission also published a “communication” announcing amendments to its 2008 guidance on enforcement priorities in challenging abusive exclusionary conduct. According to the press release, until final Article 102 guidelines are approved, this guidance “provides certain clarifications on its approach to determine whether to pursue cases of exclusionary conduct as a matter of priority.” An annex to the communication sets forth specific amendments to the 2008 guidance.

Finally, the Commission also released a competition policy brief (“a dynamic and workable effects-based approach to the abuse of dominance”) that discusses the policy justifications for the changes enumerated in the annex.

In short, the annex “toughens” the approach to abuse of dominance enforcement in five ways:

  1. It takes a broader view of what constitutes “anticompetitive foreclosure.” The annex rejects the 2008 guidance’s emphasis on profitability (cases where a dominant firm can profitably maintain supracompetitive prices or profitably influence other parameters of competition) as key to prioritizing matters for enforcement. Instead, a new, far less-demanding prosecutorial standard is announced, one that views anticompetitive foreclosure as a situation “that allow[s] the dominant undertaking to negatively influence, to its own advantage and to the detriment of consumers, the various parameters of competition, such as price, production, innovation, variety or quality of goods or services.” Under this new approach, highly profitable competition on the merits (perhaps reflecting significant cost efficiencies) might be challenged, say, merely because enforcers were dissatisfied with a dominant firm’s particular pricing decisions, or with the quality, variety, and “innovativeness” of its output. This would be a recipe for bureaucratic micromanagement of dominant firms’ business plans by competition-agency officials. The possibilities for arbitrary decision making by those officials, who may be sensitive to the interests of politically connected rent seekers (say, less-efficient competitors), are obvious.
  2. The annex diminishes the importance of economic efficiency in dominant-firm analysis. The Commission’s 2008 guidance specified that Commission enforcers “would generally intervene where the conduct concerned has already been or is capable of hampering competition from competitors that are considered to be as efficient as the dominant undertaking.” The revised 2023 guidance “recognizes that in certain circumstances a less efficient competitor should be taken into account when considering whether particular price-based conduct leads to anticompetitive foreclosure.” This amendment plainly invites selective-enforcement actions to assist less-efficient competitors, placing protection of those firms above consumer-welfare maximization. In order to avoid liability, dominant firms may choose to raise their prices or reduce their investments in cost-reducing innovations, so as to protect a relatively inefficient competitive fringe. The end result would be diminished consumer welfare.
  3. The annex encourages further micromanagement of dominant-firm pricing and other business decisions. Revised 2023 guidance invites the Commission to “examine economic data relating to prices” and to possible below-cost pricing, in considering whether a hypothetical as-efficient competitor would be foreclosed. Relatedly, the Commission encourages “taking into account other relevant quantitative and/or qualitative evidence” in determining whether an as-efficient competitor can compete “effectively” (emphasis added). This focus on often-subjective criteria such as “qualitative” indicia and the “effectiveness” of competition could subject dominant firms to costly new business-planning uncertainty. Similarly, the invitation to enforcers to “examine” prices may be viewed as a warning against “overaggressive” price discounting that would be expected to benefit consumers.
  4. The annex imposes new constraints on a firm’s decision whether to deal (beneficial voluntary exchange, an essential business freedom that underlies our free-market system – see here, for example). A revision to the 2008 guidance specifies that, “[i]n situations of constructive refusal to supply (subjecting access to ‘unfair conditions’), it is not appropriate to pursue as a matter of priority only cases concerning the provision of an indispensable input or the access to an essential facility.” This invites complaints to Brussels enforcers from scores of companies claiming that a dominant firm has offered to deal with them only on “unfair” conditions. The result may be expected to substantially undermine business efficiency, as firms stuck with the “dominant” label are pressed into suboptimal supply relationships. Dynamic efficiency will also suffer to the extent that intellectual-property holders are required to license on unfavorable terms, a prospect that may be expected to diminish dominant firms’ incentives to invest in innovation.
  5. The annex threatens to increase the number of Commission “margin-squeeze” cases, whereby vertically integrated firms are required to offer favorable wholesale terms to, and thereby prop up, downstream rivals that purchase inputs from them and “compete” with them at retail. (See here for a more detailed discussion of the margin-squeeze concept.) The current standard for margin-squeeze liability already is far narrower in the United States than in Europe, due to the U.S. Supreme Court’s decision in linkLine (2009).

Specifically, the annex announces margin-squeeze-related amendments to the 2008 guidance. The amendments aim to clarify that “it is not appropriate to pursue as a matter of priority margin squeeze cases only where those cases involve a product or service that is objectively necessary to be able to compete effectively on the downstream market.” This extends margin-squeeze downstream competitor-support obligations far beyond regulated industries; how far, only time will tell. (See here for an economic study indicating that even the Commission’s current less-intrusive margin-squeeze policy undermines consumer welfare.) The propping up of less-efficient competitors may, of course, be facilitated by having the dominant firm take the lead in raising retail prices, to ensure that the propped-up companies get “fair margins.” Such a result diminishes competitive vigor and (once again) directly harms consumers.
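To see the mechanics, consider a stylized version of the “equally efficient operator” test that typically frames margin-squeeze analysis (the numbers below are illustrative assumptions of my own, not drawn from any actual case). The question is whether the spread between the integrated firm’s retail price and the wholesale price it charges downstream rivals covers its own downstream costs:

$$p_{\text{retail}} - p_{\text{wholesale}} \;\geq\; c_{\text{downstream}}$$

Suppose the integrated firm charges a retail price of 100, charges rivals a wholesale input price of 80, and incurs downstream costs of 30. The available spread of 20 falls short of the 30 needed to operate downstream, so even a rival exactly as efficient as the integrated firm cannot match the retail price without losing money: its margin is “squeezed.” By loosening the criteria for prioritizing such cases, the annex predictably expands the set of downstream firms positioned to claim squeezed margins.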

In sum, through the annex’s revisions to the 2008 guidance, the Commission has, without public comment (and well before the release of its first-ever Article 102 guidelines), taken several significant steps that predictably will reduce competitive vitality and harm consumers in markets where “dominant” firms operate. Relatedly, to the extent that innovative firms respond to these incentives by “pulling their punches” so as not to become dominant, dynamic competition will be curtailed and economic welfare will diminish further.

How Will European Courts Respond?

Fortunately, there is a ray of hope for those concerned about the European Commission’s new interventionist philosophy regarding abuses of dominance. Although the annex and the related competition policy brief cite a host of EU judicial decisions in support of revisions to the guidance, their selective case references and interpretations of judicial holdings may be subject to question. I leave it to EU law experts (I am not one) to more thoroughly parse specific judicial opinions cited in the March 27 release. Nevertheless, it seems to me that the Commission may face some obstacles to dramatically “stepping up” its abuse-of-dominance enforcement actions along the lines suggested by the annex. 

A number of relatively recent judicial decisions underscore EU courts’ insistence on evidentiary backing and sound economic analysis to support Commission findings of anticompetitive foreclosure. Let’s look at a few.

  • In Intel v. Commission (2017), the European Court of Justice (ECJ) held that the Commission had failed to adequately assess whether Intel’s conditional rebates on certain microprocessors were capable of restricting competition under the “as-efficient competitor” (AEC) test, and referred the case back to the General Court. The ECJ also held that the balancing of the favorable and unfavorable effects of Intel’s rebate practice could be carried out only after an analysis of that practice’s capacity to exclude competitors that are at least as efficient as the dominant firm. (A stylized illustration of the AEC arithmetic follows this list.)
  • In 2022, on remand, the General Court annulled the Commission’s determination that Intel had abused its dominant position (thereby erasing its €1.06 billion fine). The court held that the Commission’s failure to respond to Intel’s argument that the AEC test was flawed, coupled with the Commission’s errors in its analysis of the contested Intel practices, meant that the “analysis carried out by the Commission is incomplete and, in any event, does not make it possible to establish to the requisite legal standard that the rebates at issue were capable of having, or were likely to have, anticompetitive effects.”
  • In Unilever Italia (2023), the ECJ responded to an Italian Council of State request for guidance in light of the Italian Competition Authority’s finding that Unilever had abused its dominant position through exclusivity clauses covering the distribution of packaged ice cream in Italy. The court found that a competition authority is obliged to assess the actual capacity to exclude by taking into account evidence submitted by the dominant undertaking (evidence the Italian authority had failed to consider). The ECJ stated that its 2017 clarification of rebate-scheme analysis in Intel also applies to exclusivity clauses.
  • Finally, in Qualcomm v. Commission (2022), the General Court set aside a 2018 Commission decision imposing a €1 billion fine on Qualcomm for abuse of a dominant position in LTE chipsets. The Commission had contended that Qualcomm’s 2011-2016 incentive payments to Apple for exclusivity reduced Apple’s incentive to shift suppliers and were capable of foreclosing Qualcomm’s competitors from the LTE-chipset market. The court found massive procedural irregularities by the Commission and held that the Commission had not shown that Qualcomm’s payments either had foreclosed or were capable of foreclosing competitors. The court concluded that the Commission had seriously erred in the evidence it relied upon, and in its failure to take into account all relevant factors, as required under the 2022 Intel decision.
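Because the AEC test figures so centrally in these decisions, a stylized version of the effective-price arithmetic commonly used for retroactive rebates may be helpful (the figures are illustrative assumptions of my own, not taken from Intel). Suppose a dominant firm sells at a list price of 10 and grants a retroactive rebate of 10 percent on all 100 units a customer buys, while only a “contestable share” of 20 units is realistically open to rivals. A rival bidding for those 20 units must compensate the customer for the rebate lost on all 100 units (a total of 100, or 5 per contestable unit), so the dominant firm’s effective price over the contestable range is:

$$p_{\text{eff}} \;=\; p\left(1 - \frac{r}{s}\right) \;=\; 10\left(1 - \frac{0.10}{0.20}\right) \;=\; 5$$

where p is the list price, r the rebate rate, and s the contestable share of demand. If the relevant cost benchmark (say, average avoidable cost) is 6, even a competitor exactly as efficient as the dominant firm would have to price below cost to win the contestable sales, and the rebate is deemed capable of foreclosure. The thrust of the decisions above is that the Commission must actually engage with this kind of arithmetic, and with the dominant firm’s rebuttal evidence, rather than presume exclusionary effects.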

These decisions are not, of course, directly related to the specific changes announced in the annex. They do, however, raise serious questions about how EU judges will view new aggressive exclusionary-conduct theories based on amendments to the 2008 guidance. In particular, EU courts have signaled that they will:

  1. closely scrutinize Commission fact-finding and economic analysis in evaluating exclusionary-abuse cases;
  2. require enforcers to carefully weigh factual and economic submissions put forth by dominant firms under investigation;
  3. require that enforcers take economic-efficiency arguments seriously; and
  4. continue to view the “as-efficient competitor” concept as important, even though the Commission may seek to minimize the test’s significance.

In other words, in the EU, as in the United States, reviewing courts may “put a crimp” in efforts by competition agencies to read case law very broadly, so as to “rein in” allegedly abusive dominant-firm conduct. In jurisdictions with strong rule-of-law traditions, enforcers propose but judges dispose. The kicker, however, is that judicial review takes time. In the near term, firms will have to absorb additional business-uncertainty costs.

What About the States?

“Monopolization”—rather than the European “abuse of a dominant position”—is, of course, the key single-firm conduct standard under U.S. federal antitrust law. But the debate over the Commission’s abuse-of-dominance standards nonetheless is significant for American antitrust enforcement.

Under U.S. antitrust federalism, the individual states are empowered to enact antitrust legislation that goes beyond the strictures of federal antitrust law. Currently, several major states—New York, Pennsylvania, and Minnesota—are considering antitrust bills that would add abuse of a dominant position as a new state antitrust cause of action (see here, here, here, and here). What’s more, the most populous U.S. state, California, may consider similar legislation as well (see here). Such new laws would undermine consumer welfare (see my commentary here).

If certain states enacted a new abuse-of-dominance standard, it would be natural for their enforcers to look to EU enforcers (with their decades of relevant experience) for guidance. As such, the annex (and future Commission guidelines, which one would expect to be consistent with it) could prove quite influential in promoting highly interventionist state policies that reach far beyond federal monopolization standards.

What’s worse, federal case law limiting the scope of Sherman Act monopolization claims would have little or no force in constraining state judges’ application of any new abuse-of-dominance standards. And it is questionable whether state judges would feel empowered, or even equipped, to apply the often-confusing EU case law on abuse of dominance independently as a constraint on state enforcers’ prosecutions.

Conclusion

The Commission’s emerging guidance on abuse of dominance is bad for consumers and for competition. EU courts may constrain some Commission enforcement excesses, but that will take time, and new short-term business-uncertainty costs are likely.

Moreover, negative effects may eventually also be felt in the United States if states enact proposed abuse-of-dominance prohibitions and state enforcers adopt the European Commission’s interventionist philosophy. State courts, applying an entirely new standard not found in federal law, should not be expected to play a significant role in curtailing aggressive state prosecutions for abuse of dominance.  

Promoters of principled, effects-based, economics-centric antitrust enforcement should take heed. They must be prepared to highlight the ramifications of both foreign and state-level initiatives as they continue to advocate for market-based antitrust policies. Sound law & economics training for state enforcers and judges likely will become more important than ever.