Brexit was supposed to free the United Kingdom from Brussels’ heavy-handed regulation and red tape. But dreams of a Singapore-on-the-Thames are slowly giving way to ill-considered regulation that threatens to erode Britain’s position as one of the world’s leading tech hubs.
The UK Competition and Markets Authority’s recent decision to block the merger of Microsoft and game-maker Activision-Blizzard offers a case in point. Less than a month after the CMA formally announced its opposition to the deal, the European Commission threw a spanner into the authority’s works. Looking at the same facts, the commission—no paragon of free-market thinking—concluded the merger would benefit competition and consumers, paving the way for it to move ahead on the Old Continent.
The two regulators disagree on the likely effects of Microsoft’s acquisition. The European Commission surmised that bringing Activision-Blizzard titles to Microsoft’s Xbox would create tougher competition for Sony, leading to lower prices and better games (conditional on several remedies). This makes sense. Sony’s PlayStation 5 is by far the market leader, currently outselling the Xbox four to one. Closing the content gap between these consoles will make the industry more competitive.
In contrast, the CMA’s refusal hinged on hypothetical concerns about the embryonic cloud-gaming market, which is estimated to be worth £2 billion worldwide, compared to £40 billion for console gaming. The CMA feared that, despite proposed temporary remedies, Microsoft would foreclose rivals by eventually making Activision-Blizzard titles exclusive to its cloud platform.
Unfortunately, this narrow focus on cloud gaming at the expense of the console market essentially amounts to choosing a bird in the bush instead of two in the hand. Worse, it highlights the shortcomings of the UK’s current approach to economic regulation.
Even if the CMA were correct on the substance of the case—and there are strong reasons to believe it is not—its decision would still be harmful to the UK economy. For one thing, this tough stance may cause two of the world’s leading tech firms to move thousands of jobs away from the UK. More fundamentally, foreign companies and startup founders will not want to tie themselves to a jurisdiction whose regulatory authorities show such disdain for the firms they host.
Given what we have already seen from the CMA, it would appear ill-advised to further increase the authority’s powers and reduce judicial oversight of its decisions. Yet that is precisely what the pending Digital Markets, Competition and Consumers Bill would do.
The bill would give the CMA vast authority to shape firms operating in “digital markets” according to its whims. It would cover almost any digital service offered by a firm whose turnover exceeds certain thresholds. And just like the CMA’s merger-review powers, these new rules would be subject to only limited judicial oversight—judicial review rather than merits-based appeals.
The power to shape the internet in the UK (and, indirectly, abroad) would thus be entrusted to a regulator that fails to grasp that hypothetical and remediable concerns in one tiny market (cloud gaming) are no reason to block a transaction that has vast countervailing benefits in another (console gaming).
In turn, this threatens to deter startup creation in the UK. Firms will invest abroad if choosing the UK makes them vulnerable to the whims of an overzealous regulator, which would be the case under the digital markets bill. This could mean fewer tech jobs in the UK, as well as the erosion of London’s status as one of the world’s leading tech hubs.
The UK is arguably at the forefront of technologies like artificial intelligence and nuclear fusion. A tough merger-control policy that signals to startup founders that they will be barred from selling their companies to larger firms could have a disastrous impact on the UK’s competitiveness in those fields.
The upshot is that, when it comes to economic regulation, the United Kingdom is not an island. It cannot stand alone in a globalized world, where tech firms, startup founders, and VCs choose the jurisdictions that are most accommodating and that maximize the chance their businesses will thrive.
With Brexit now complete, the UK is free to replace legacy Brussels red tape with light-touch rules that attract foreign firms and venture capital investments. Yet the UK seems to be replicating many of Brussels’ shortcomings. Fortunately, there is still time for Parliament to change course on the digital markets bill.
The United Kingdom’s 2016 “Brexit” decision to leave the European Union created the opportunity for the elimination of unwarranted and excessive EU regulations that had constrained UK economic growth and efficiency.
Recognizing that fact, former Prime Minister Boris Johnson launched the Task Force on Innovation, Growth, and Regulatory Reform, whose May 2021 report recommended “a new regulatory vision for the UK.” That vision emphasized “[p]romot[ing] productivity, competition and innovation through a new framework of proportionate, agile and less bureaucratic regulation.”
Although the report contained numerous specific reform proposals, relatively little happened in its immediate wake. Last week, however, the UK Department for Business and Trade announced an initial package of regulatory reforms intended to “reduce unnecessary regulation for businesses, cutting costs and allowing them to compete.” The initial package is focused on:
“reducing the business burden”;
“[e]nsuring regulation is, by default, the last rather than first response of Government”;
“[i]mproving regulators’ focus on economic growth by ensuring regulatory action is taken only when it is needed”;
“[p]romoting competition and productivity in the workplace”; and
“[s]timulating innovation, investment and growth by announcing two strategic policy statements to steer our regulators.”
For too long the UK’s approach to regulation has been warped by a strange kind of numbers game: How many laws can be removed? What percentage of EU laws on the UK rule book can be dispensed with? How many quangos can go on the bonfire?
It’s the kind of misguided approach that has led to headline-grabbing projects like the revival of imperial measures – a purely symbolic gesture that did nothing to improve competition, liberalise the economy or raise people’s living standards.
In place of this performative approach, our new book Trade, Competition and Domestic Regulatory Policy suggests a very different path to regulatory reform.
First, does the proposed reform establish a framework that can be used to ensure that future regulation is as pro-competitive as possible? Are actual mechanisms established, or are the principles merely hortatory?
Second, how does the reform impact the stock of existing regulation? How precisely will those regulations be made more proportionate, subject to the test of necessity, and generate pro-competitive and open trade outcomes?
Third, is there a moral philosophical choice embedded in the approach? This will be vital to ensuring that reform is not some random hotch-potch of ideas, designed more for a tabloid front page than as a real, sustainable and concrete reform.
Encouragingly, if we look through these lenses in turn, we find that the beginnings of a framework are emerging here in the UK.
The Government’s recent package of regulatory reform has much to commend it. It establishes an overall set of governing principles for future regulation, and also requires the review of our existing stock of regulation, including the body of EU rules that are still part of UK law. The focus on necessity, proportionality and competition is particularly welcome, as is the consideration of how regulation affects economic growth.
It’s not perfect – we do think, for instance, that the framework could go further and embed the Competition and Markets Authority more concretely into the regulatory promulgation process. This should not be controversial. The OECD itself made these recommendations in its Regulatory Toolkit and Competition Assessment some 20 years ago, which was coincidentally the time when the spread of regulatory distortions seemed to accelerate. The International Competition Network (ICN), comprising most national competition agencies, has also recommended that those agencies advocate for competition in the regulatory promulgation process.
The UK has indicated that it would apply this approach to the stock of regulation, much of which is retained EU law. This represents an opportunity for the UK, as most countries do not have a readily identifiable corpus of regulation to start with. Certainly it is helpful to ensure that common law approaches are applied to the entire UK rule book (including any retained EU law), and that UK interpretation (by judges, and the executive branch) trumps any interpretation of the Court of Justice of the European Union. Of course, it would have been better to have undertaken this task six years ago, when we knew it would be necessary.
There is less clarity, regrettably, around the philosophical underpinnings of this regulatory approach. Back in the early 2000s, the OECD recognised the long-held view that pro-competitive regulation does indeed stimulate an increase in GDP per capita. Separately, the same has been recognised for open trading systems and property-rights protection.
None of this should be remotely controversial in the UK, or indeed anywhere else. It is unfortunate that it has become so, largely because of an approach based on a Manichaean view that all EU regulations are bad, and all UK regulations are good, and that success is to be judged on the number of EU rules removed.
More generally, all other nations would also benefit from systematic regulatory reform that aims to ferret out the many anticompetitive market distortions that severely limit economic growth and welfare enhancement. We discuss this topic at length in our recent book on Trade, Competition, and Domestic Regulatory Policy.
The United Kingdom’s Competition and Markets Authority (CMA) late last month moved to block Microsoft’s proposed vertical acquisition of Activision Blizzard, a video-game developer that creates and publishes games such as Call of Duty, World of Warcraft, Diablo, and Overwatch. Microsoft summarized this transaction’s substantial benefits to video game players in its January 2022 press release announcing the proposed merger.
The CMA based its decision on speculative future harm in UK cloud-based gaming, neglecting the dramatic and far more likely dynamic competitive benefits the transaction would produce in gaming markets. The U.S. Federal Trade Commission (FTC) announced its own challenge to the merger in December and has scheduled administrative hearings into the matter later in 2023.
If not overturned on appeal, the CMA’s decision is likely to reduce future innovation in the gaming sector, to the detriment of producers and consumers alike.
According to the CMA’s announcement:
Microsoft has a strong position in cloud gaming services and the evidence available to the CMA showed that Microsoft would find it commercially beneficial to make Activision’s games exclusive to its own cloud gaming service.
Microsoft already accounts for an estimated 60-70% of global cloud gaming services and has other important strengths in cloud gaming from owning Xbox, the leading PC operating system (Windows) and a global cloud computing infrastructure (Azure and Xbox Cloud Gaming).
The deal would reinforce Microsoft’s advantage in the market by giving it control over important gaming content such as Call of Duty, Overwatch, and World of Warcraft. The evidence available to the CMA indicates that, absent the merger, Activision would start providing games via cloud platforms in the foreseeable future.
The CMA’s discussion ignores a number of salient facts regarding cloud gaming. Cloud gaming has not yet arrived as a major competitor to device-based gaming, as Dirk Auer points out (see also here regarding problems that have constrained the rapid emergence of cloud gaming). Google, for example, discontinued its Stadia cloud-gaming service just over three months ago, “after having failed to gain the traction that the company was expecting” (see here). Although cloud gaming does not require the purchase of specific gaming devices, it does require substantial bandwidth, stable internet connections, and subscriptions to particular services.
What’s more, Microsoft offered the CMA significant concessions to ensure that leading Activision games would remain available on other platforms for at least 10 years (see here, for example). The CMA itself acknowledged this in announcing its opposition to the merger, but rejected Microsoft’s proposals, stating:
Accepting Microsoft’s remedy would inevitably require some degree of regulatory oversight by the CMA. By contrast, preventing the merger would effectively allow market forces to continue to operate and shape the development of cloud gaming without this regulatory intervention.
Ironically, the real “regulatory intervention” that threatens to hinder market forces is the CMA’s blocking of this transaction, which (as a vertical merger) does not eliminate any direct competition and, to the contrary, promises to reinvigorate direct competition with Sony’s PlayStation. As Aurelien Portuese explains:
Sony is cheering on . . . attempt[s] to block Microsoft’s acquisition of Activision. Why? The proposed merger is a bid to offer a robust platform with high-quality games and provide resources for creators to produce more gaming innovation. That’s great for gamers, but threatening to Japanese industry titans Sony and Nintendo, because it would also create a company capable of competing with them more effectively.
If antitrust officials block the merger, they would be giving Sony and its 70 percent share of the global gaming console market the upper hand while preventing Microsoft and its 30 percent market share from effectively challenging the incumbent. That would be a complete reversal of competition policy.
The Japanese gaming industry dominates the world—and yet, U.S. antitrust officials may very well further cement this already decades-long dominance by blocking the Activision-Microsoft merger. Wielding antitrust to impose a twisted conception of domestic competition at the expense of global competitiveness must end, and the proposed Activision-Microsoft combination exemplifies why.
Furthermore, Portuese debunks the notion that Microsoft would have a future incentive to deny access to Activision’s high-selling Call of Duty franchise, reemphasizing the vigorous nature of gaming competition post-merger:
[T]he very idea that Microsoft would want to foreclose access to “Call of Duty” for PlayStation users is controversial. Microsoft would rationally have little incentive to reduce sales across platforms of a popular game. Moreover, Microsoft’s competitive position is weaker than the FTC seems to think: It faces competition from gaming industry incumbents such as Sony, Nintendo, and Epic Games, and from other large tech companies such as Apple, Amazon, Google, Tencent, and Meta.
In short, there are strong reasons to believe that gaming competition would be enhanced by the Microsoft-Activision merger. What’s more, the merger would likely generate efficiencies of integration, such as the promotion of cross-team collaboration (see here, for example). Notably, in announcing its decision to block the merger, even the CMA acknowledged “the benefit of having Activision’s content available on [Microsoft’s subscription service] Game Pass.” In contrast, theoretical concerns about merger-related potential threats to future cloud-gaming competition are uncertain and not well-grounded.
The CMA should not have blocked the merger. The agency’s opposition to this transaction reflects a blinkered focus on questionable possible future harm in a not-yet developed market, and a failure to properly weigh likely substantial near-term competitive benefits in a thriving existing market.
This is the sort of decision that tends to discourage future procompetitive, efficiency-generating high-tech acquisitions, to the detriment of producers and consumers.
The threat to future vertical mergers that bring together complementary assets to generate attractive new offerings for consumers in dynamically evolving market sectors is particularly unfortunate. Competition agencies should reflect on this reality and rethink their approaches. (FTC, are you paying attention?)
It appears that the emergence of ChatGPT and other artificial-intelligence systems has complicated the European Union’s efforts to implement its AI Act, mostly by challenging its underlying assumptions. The proposed regulation seeks to govern a diverse and rapidly growing AI landscape. In reality, however, there is no single thing that can be called “AI.” Instead, the category comprises various software tools that employ different methods to achieve different objectives. The EU’s attempt to cover such a disparate array of subjects under a common regulatory framework is likely to be ill-fitted to achieve its intended goals.
Overview of the AI Act
As proposed by the European Commission, the AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights. The proposal defines AI systems broadly to include any software that uses machine learning, and sorts them into three risk levels: unacceptable, high, and limited risk. Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments. Limited-risk systems face lighter requirements, chiefly related to adequate documentation and transparency.
As my colleague Mikolaj Barczentewicz has pointed out, however, the AI Act remains fundamentally flawed. By defining AI so broadly, the act would apply even to ordinary general-purpose software, let alone to software that uses machine learning but does not pose significant risks. The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of objectively harmless software.
Understanding Regulatory Overaggregation
Regulatory overaggregation—that is, the grouping of a huge number of disparate and only nominally related subjects under a single regulatory regime embodied by an abstract concept—is not a new issue. We can see evidence of it in the EU’s previous attempts to use the General Data Protection Regulation (GDPR) to oversee the vast domain of “privacy.”
“Privacy” is a capacious concept that includes, for instance, both the creeped-out feelings that certain individual users may feel in response to being tracked by adtech software, as well as potential violations of individuals’ expectations of privacy in location data when cell providers sell data to bounty hunters. In truth, what we consider “privacy” comprises numerous distinct problem domains better defined and regulated according to the specific harms they pose, rather than under one all-encompassing regulatory umbrella.
Similarly, “AI” regulation faces the challenge of addressing various mostly unrelated concerns, from discriminatory bias in lending or hiring to intellectual-property usage to opaque algorithms employed for fraudulent or harmful purposes. Overaggregated regulation, like the AI Act, results in a framework that is both overinclusive (creating unnecessary burdens on individuals and businesses) and underinclusive (failing to address potential harms in its ostensible area of focus, due to its overly broad scope).
In other words, as noted by Kai Zenner, an aide to Member of the European Parliament (MEP) Axel Voss, the AI Act is obsessed with risks to the detriment of innovation.
This overaggregation is likely to hinder the AI Act’s ability to effectively address the unique challenges and risks associated with the different types of technology that constitute AI systems. As AI continues to evolve rapidly and to diversify in its applications, a one-size-fits-all approach may prove inadequate to the specific needs and concerns of different sectors and technologies. At the same time, the regulation’s overly broad scope threatens to chill innovation by causing firms to second guess whether they should use algorithmic tools.
Disaggregating Regulation and Developing a Proper Focus
The AI landscape is complex and constantly changing. Its systems include various applications across industries like health care, finance, entertainment, and security. As such, a regulatory framework to address AI must be flexible and adaptive, capable of accommodating the wide array of AI technologies and use cases.
More importantly, regulatory frameworks in general should focus on addressing harms, rather than the technology itself. If, for example, bias is suspected in hiring practices—whether facilitated through an AI algorithm or a simple Excel spreadsheet—that issue should be dealt with as a matter of labor law. If labor law or any other class of laws fails to account for the negative use of algorithmic tools, they should be updated accordingly.
Similar nuance should be applied in areas like intellectual property, criminal law, housing, fraud, and so forth. It’s the illicit behavior of bad actors that we want the law to capture, not to adopt a universal position on a particular software tool that might be used differently in different contexts.
To reiterate: it is the harm that matters, and regulations should be designed to address known or highly likely harms in well-defined areas. Creating an overarching AI regulation that attempts to address every conceivable harm at the root technological level is impractical, and could be a burdensome hindrance on innovation. Before it makes a mistake, the EU should reconsider the AI Act and adopt a more targeted approach to the harms that AI technologies may pose.
As it currently stands, it appears the regulation will significantly increase costs for the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in standard-essential patent (SEP) licensing negotiations.
It also might harm the EU’s innovativeness on the global stage and set precedents for other countries to regulate, possibly jeopardizing how the entire international technical-standardization system functions.
The regulation originates from last year’s call by the European Commission to establish principles and implement measures that will foster a “balanced,” “smooth,” and “predictable” framework for SEP licensing. With this in mind, the reform aims “to promote an efficient and sustainable SEP licensing ecosystem, where the interests of both SEP holders and implementers are considered” [emphasis added]. As explicitly mentioned in the call, the main problems affecting the SEP ecosystem are holdup, holdout, and forum shopping.
Unfortunately, it is far from clear these premises are correct or that they justify the sort of regulation the Commission now contemplates.
The draft regulation purports to fix a broken regime by promoting efficient licensing and ensuring a fair balance between the interests of patent holders and implementers, thereby mitigating the risks of both holdup and holdout, as required by well-established case law—in particular, the Court of Justice of the European Union’s (CJEU) landmark Huawei v. ZTE decision.
There is, however, scant evidence that the current SEP-licensing regime is inefficient or unbalanced. The best available evidence suggests that SEP-reliant industries are no less efficient than other innovative industries. Likewise, SEP holders do not appear to be capturing the lion’s share of profits in the industries where they operate. In short, it’s not clear there is any problem to solve in the first place.
There is also scant evidence that the Commission has taken account of hugely important geopolitical considerations. Policymakers are worried that Chinese companies (with the support of Chinese courts and authorities) may use litigation strategies to obtain significantly lower “fair, reasonable, and non-discriminatory” (FRAND) rates.
Indeed, the EU filed a case against China at the World Trade Organization (WTO) last year that complained about the strategic use of anti-suit injunctions (ASIs)—that is, orders restraining a party either from pursuing foreign proceedings or enforcing a judgment obtained in foreign proceedings. As explained in a recent paper, this trend could have severe economic repercussions, given that the smooth implementation of mobile-telecommunication standards is crucial to the economic potential of both the so-called “Internet of Things” and U.S. communications infrastructure writ large.
By disproportionately targeting inventors (as we argue below), the draft regulation penalizes precisely those companies that, from the perspective of geopolitics, it should be protecting (or, at least, not undermining). Indeed, as the Commission’s impact assessment warns, the share of SEPs owned by Chinese companies has increased dramatically in recent years. Penalizing European inventors will only exacerbate this trend.
Missing the Mark
Given the importance of achieving a balance between holdup and holdout, as well as avoiding steps that could reinforce China’s position on the geopolitical map, the leaked version of the forthcoming EU regulation is deeply concerning, to say the least.
Rather than wrestling with these complex issues, the proposal essentially focuses on ensuring that implementers receive licenses at affordable royalty rates. In other words, it would create significant red tape and compliance costs in an attempt to address an issue that is mostly peripheral to the stated aims, and arguably already dealt with by EU courts in Huawei v. ZTE. That decision, notably, forces parties to negotiate royalties in good faith before they can pursue judicial remedies, such as ASIs.
Critically, the proposal surmises that there is currently little transparency regarding the aggregate royalties that implementers pay for all the SEPs that underpin a standard. The proposal assumes that making this information public would enable implementers to make better determinations when they negotiate royalties.
To address this, the proposal creates several mandatory procedures that ultimately serve to make information on total royalty burdens public. It also creates a procedure that parties can use to obtain nonbinding FRAND royalty determinations from third-party arbitrators. More precisely, if contributors do not agree on an aggregate royalty sufficiently early before components and products implementing the standardized technology are put on the market, implementers and/or contributors can ask the EU Intellectual Property Office (EUIPO) to appoint conciliators tasked with recommending an aggregate royalty (with exceedingly limited ability to appeal such determinations).
The proposal has at least two important drawbacks.
To start, it is unclear what a nonbinding royalty recommendation would achieve. On the one hand, backers might hope the nonbinding recommendations will, de facto, be transposed by national courts when they rule on FRAND disputes. This may well be correct, but it is far from ideal. One of the great strengths of the current system is that courts in different jurisdictions compete to become the forum of choice for royalty disputes. In doing so, they constantly refine the way they rule on such disputes. Replacing this emergent equilibrium with a one-size-fits-all approach would be a great loss.
Conversely, it’s plausible that national courts will continue to go about their daily business, largely ignoring the EUIPO royalty recommendations. If that were the case, one could legitimately ask what a lengthy and costly system of nonbinding royalty determinations really achieves. Whatever the case, the draft regulation offers little vision as to how its planned royalty determinations will improve actual outcomes.
A second important issue is that, in its current form, the proposal seems myopically focused on prices. This is a problem because licensing negotiations involve a much broader range of terms. Such considerations as available remedies and penalties, license-termination conditions, cross-licensing, and jurisdiction are often just as important as price.
Not only are these issues conspicuously absent from the draft regulation, but properly accounting for them would largely undermine the regulation’s price-comparison mechanism, since this heterogeneity renders such comparisons essentially apples to oranges.
Along similar lines, the draft regulation also includes a system of sampling to determine whether patents are truly essential to the underlying standard. These checks would be conducted by independent evaluators, selected according to criteria and applying a methodology that the Commission has yet to develop, to ensure that the sample can produce statistically valid results.
It’s unclear how much such a mechanism would enhance the status quo. Moreover, according to the proposal, the results of these essentiality checks would also not be legally binding. Rather than enhancing SEP-licensing negotiations and safeguarding the effectiveness of essentiality checks, this solution would just exacerbate holdout concerns. Indeed, implementers may use the process to delay negotiations or avoid payment of royalties while the process is ongoing.
The Commission’s proposal also sends all the wrong signals internationally. In turn, this may undermine the geopolitical interests of both the EU and the United States.
By signaling its willingness to more closely interfere with the royalty rates agreed between inventors and implementers—even for patents registered outside the EU—the EU is effectively inviting other jurisdictions to do the same (or legitimizing ongoing efforts to do so).
This is far from ideal. For instance, Chinese government officials and courts have increasingly sought to influence and rule on global FRAND disputes, generally in ways that favor Chinese firms, which are largely on the implementer side of disputes. The EU’s proposal sends a strong signal that it is fair game for government agencies to more directly influence global FRAND royalty rates, and to seek to override the decisions of foreign courts.
In short, the EU’s draft regulation will embolden foreign jurisdictions to respond in kind and seek further authority over the royalty rates agreed upon by private parties. Ultimately, this will infuse the SEP-licensing space with politicized oversight and vindicate China’s moves to depress the value of the West’s intellectual property, thus giving its state-backed rivals a leg up. At a time when geopolitical tensions between China and the West are historically high, such a move seems particularly ill-advised.
In sum, rather than strike a balance between patent owners’ and implementers’ interests, the EU proposal is one-sided. It only introduces burdens on SEP holders and disregards the significant risks of holdout strategies. Such a framework for SEP licensing would be at odds with the framework crafted by the CJEU in Huawei.
Further, it would also undermine the value of many SEPs in ways that would be particularly appreciated by Chinese policymakers. The consequences of such an approach would be disruptive for entire SEP-reliant industries, and for the EU’s economic interests.
To tackle these supposed ills, which range from exclusionary practices and disinformation to encroachments on privacy and democratic institutions, it is asserted that sweeping new ex ante rules must be enacted and the playing field tilted in favor of enforcement agencies, which have hitherto faced what advocates characterize as insurmountable procedural hurdles (here and here).
Unfortunately, the DMA is not the paragon of regulation it is sometimes made out to be. Indeed, the law is structured less to advance any purportedly universal set of principles than to align digital platforms’ business models with an idiosyncratic and specifically European industrial policy, rooted in politics and protectionism. As explained below, it is unlikely other countries would benefit from emulating this strategy.
The DMA’s Protectionist Origins
While the DMA is today often lauded as eminently pro-competition (here and here), prior to its adoption, many leading European politicians were touting the text as a protectionist industrial-policy tool that would hinder U.S. firms to the benefit of European rivals: a far cry from the purely consumer-centric tool it is sometimes made out to be. French Minister of the Economy Bruno Le Maire, for example, acknowledged as much in 2021 when he said:
Digital giants are not just nice companies with whom we need to cooperate, they are rivals, rivals of the states that do not respect our economic rules, which must therefore be regulated… There is no political sovereignty without technological sovereignty. You cannot claim sovereignty if your 5G networks are Chinese, if your satellites are American, if your launchers are Russian and if all the products are imported from outside.
This logic dovetails neatly with the EU’s broader push for “technology sovereignty,” a strategy intended to reduce the continent’s dependence on technologies that originate abroad. The strategy already has been institutionalized at different levels of EU digital and industrial policy (see here and here). In fact, the European Parliament’s 2020 Briefing on “Digital Sovereignty for Europe” explicitly anticipates that an ex ante regulatory regime similar to the DMA would be a central piece of that puzzle. French President Emmanuel Macron summarized it well when he said:
If we want technological sovereignty, we’ll have to adapt our competition law, which has perhaps been too much focused solely on the consumer and not enough on defending European champions.
Moreover, it can be argued that the DMA was never intended to promote European companies that could seriously challenge the dominance of U.S. firms (see here at 13:40-14:20). Rather, the goal was always to redistribute rents across the supply chain away from digital platforms and toward third parties and competitors (what is referred to as “business users,” as opposed to “end users”). After all, with the arguable exception of Spotify and Booking.com, the EU has none of the former, and plenty of the latter. Indeed, as Pablo Ibañez Colomo has written:
The driver of many disputes that may superficially be seen as relating to leveraging can be rationalised, more convincingly, as attempts to re-allocate rents away from vertically-integrated incumbents to rivals.
Alternative Digital Strategies to the DMA
While the DMA strives to use universal language and has a clear ambition to set global standards, under this veneer of objectivity lies a very particular vision of industrial policy and a certain normative understanding of how rents should be allocated across the value chain. That vision is not apt for everyone and, indeed, may not be apt for anyone (see here). Other countries can certainly look to the EU for inspiration and, admittedly, it would be ludicrous to expect them to ignore what goes on in the bloc.
When deciding whether and what sort of legislation to enact, however, other countries should ultimately seek those approaches that are appropriate to their own context. What they ought not do is reflexively copy templates made with certain goals in mind, which they might not share and which may be diametrically opposed to their own interests or values. Below are some suggestions for alternative strategies to the DMA.
The EU—the world’s third-largest economy—can perhaps afford to impose costly and burdensome regulation on digital companies, because it has considerable leverage to ensure (with some, though as we have seen, by no means absolute, certainty) that they will not desert the European market. Smaller economies that GAMA is unlikely to regard as essential markets, however, are playing a different game.
Not only do they have much smaller carrots to dangle, but they also disproportionately benefit from the enormous infrastructural investments and consumer benefits brought by GAMA (see, for example, here and here). In this context, the wiser strategy for smaller, ostensibly “nonessential” markets might be to court GAMA, rather than to castigate it. Instead of imposing intricate, costly, and untested regulatory obligations on digital platforms, these countries may reasonably wish to emphasize or bolster the transparency, predictability, and procedural safeguards (including credible judicial review) of their competition-law systems. After all, to regulate competition, you must first attract it.
Indeed, while competition is as important in developing markets as developed ones, developing markets are especially dependent upon competition rules that encourage investment in infrastructure to facilitate economic growth and that offer a secure environment for ongoing innovation. Particularly for relatively young, rapidly evolving industries like digital markets, attracting consistent investment and industry know-how ensures that such markets can innovate and transition into maturity (here and here).
Moreover, the case-by-case approach of competition law allows enforcers to tackle harmful behavior while capturing digital platforms’ procompetitive benefits, rather than throwing the baby out with the bathwater by imposing blanket prohibitions. As Giuseppe Colangelo has suggested, the assumption that competition laws are insufficient to tackle anticompetitive conduct in digital markets is a questionable one, given that most of the DMA’s contemplated prohibitions have also been the object of separate antitrust suits in the EU.
Careful Consideration of Costs and Unintended Consequences
DMA-style ex ante regulation is still untested. Its benefits, if any, remain mostly theoretical. A tradeoff between, say, foreign direct investment (FDI) and ex ante regulation might make sense for some emerging markets if it were clear what was being traded, and at what cost. Alas, such regulations are still in an incipient phase.
The U.S. antitrust bills targeting a handful of companies seem unlikely to be adopted soon; the UK’s Digital Markets Unit proposal has still not been put to Parliament; and Japan and South Korea have imposed codes of conduct only in narrow areas. Even the DMA—the most comprehensive legislative attempt to “rein in” digital companies—entered into force only last October, and it will not start imposing its obligations on gatekeepers until February or March 2024, at the earliest.
Despite the uncertainty inherent in deploying experimental regulation in a fast-moving market, the EU has clearly decided that these risks are not sufficient to offset the DMA’s benefits (see here for a critical appraisal). But other countries should not take the EU’s word for it.
In conducting an independent examination, they may place more value on some of the DMA’s expected negative consequences, or may find their likelihood of occurring to be unacceptably high. This could be due to endogenous or highly context-dependent factors. In some cases, the tradeoff could mean too large a sacrifice of FDI, while in others, the rules could impinge on legitimate policy priorities, like national security. In either case, countries should evaluate the risks and benefits of the ex ante regulation of digital platforms themselves, and go their own way.
Giving enforcers wide discretionary powers to reshape digital markets and override product-design decisions might not be a good idea in countries with a poor track record of keeping corruption in check, or where enforcers lack the required know-how to do so effectively. Simple norms, backed by the rule of law, may not be sufficient to counteract these background conditions. But they also may be preferable to the broad mandates and tools envisioned by the kinds of ex ante regulatory proposals currently in vogue.
Smaller countries with limited budgets would probably also benefit more from prosecuting unequivocally harmful (and widespread) conduct, like cartels (the “cancers of the market economy”), bid rigging, distortive state aid, and mergers that create actual monopolies (see, for example, here and here), than from applying experimental regulation underpinned by tenuous theories of harm and indeterminate benefits.
In the end, the DMA has been mistakenly taken to be a panacea or a blueprint for how to regulate tech, when it is neither. It is, instead, a particularistic approach that may or may not achieve its stated goals. In any case, it must be understood as an outgrowth of a certain industrial-policy strategy and a sui generis vision of how digital markets should distribute rents (spoiler alert: in the interest of European companies).
[The following is a guest post from Igor Nikolic, a research fellow at the European University Institute.]
The European Commission is working on a legislative proposal that would regulate the licensing framework for standard-essential patents (SEPs). A regulatory proposal leaked to the press has already been the subject of extensive commentary (see here, here, and here). The proposed regulation apparently will include a complete overhaul of the current SEP-licensing system and will insert a new layer of bureaucracy in this area.
This post seeks to explain how the EU’s current standardization and licensing system works and to provide some preliminary thoughts on the proposed regulation’s potential impacts. As it currently stands, it appears the regulation will significantly increase costs to the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in SEP-licensing negotiations.
It also might harm the EU’s innovativeness on the global stage and set precedents for other countries to regulate, possibly jeopardizing how the entire international technical-standardization system functions. An open public discussion about the regulation’s contents might provide more time to think about the goals the EU wants to achieve on the global technology stage.
How the Current System Works
Modern technological standards are crucial for today’s digital economy. 5G and Wi-Fi standards, for example, enable connectivity between devices in various industries. 5G alone is projected to add up to €1 trillion to the European GDP and create up to 20 million jobs across all sectors of the economy between 2021 and 2025. These technical standards are typically developed collaboratively through standards-development organizations (SDOs) and include patented technology, called standard-essential patents (SEPs).
Companies working on the development of standards before SDOs are required to disclose patents they believe to be essential to a standard, and to commit to license such patents on fair, reasonable and non-discriminatory (FRAND) terms. For various reasons inherent to the system, far more patents are disclosed as potentially essential than end up being truly essential to a standard. For example, one study calculated that there were 39,000 and 45,000 patents declared essential to 3G UMTS and 4G LTE, respectively, while another estimated as many as 95,000 patent declarations for 5G. Commercial studies and litigated cases, however, paint a different picture: only about 10% to 40% of the disclosed patents were held to be truly essential to a standard.
The discrepancy between the tens of thousands of disclosed patents and the much lower number of truly essential patents is said to create an opaque SEP-licensing landscape. The principal reason for this mismatch, however, is that SDO databases of disclosed patents were never intended to provide an accurate picture of truly essential patents for use in licensing negotiations. For standardization, the much greater danger lies in the possibility of some patents remaining undeclared, thereby avoiding a FRAND commitment and jeopardizing successful market implementation. From that perspective, the broadest possible patent declarations are encouraged, in order to guarantee that the standard will remain accessible to implementers on FRAND terms.
SEP licensing occurs both in bilateral negotiations and via patent pools. In bilateral negotiations, parties try to resolve various technical and commercial issues. Technical questions include:
Whether and how many patents in a portfolio are truly essential;
Whether such patents are infringed by standard-implementing products; and
How many of these patents are valid.
Parties also need to agree on the commercial terms of a license, such as the level of royalties, the royalty-calculation methods, the availability of discounts, the amount of royalties for past sales, any cross-licensing provisions, etc.
SEP owners may also join their patents in a pool and license them in a single portfolio. Patent pools are known to significantly reduce transaction costs to all parties and provide a one-stop shop for implementers. Most licensing agreements are concluded amicably but, in cases where parties cannot agree, litigation may become necessary. The Huawei v ZTE case provided a framework for good-faith negotiation, and courts of the EU member states have become accustomed to evaluating the conduct of both parties.
What the Proposed Regulation Would Change
According to the Commission, SEP licensing is plagued with inefficiencies, apparently stemming from insufficient transparency and predictability regarding SEPs, uncertainty about FRAND terms and conditions, high enforcement costs, and inefficient enforcement.
As a solution, the leaked regulation would entrust the European Union Intellectual Property Office (EUIPO)—currently responsible for EU trademarks—with establishing a register of standards and SEPs, conducting essentiality checks that would assess whether disclosed patents are truly essential for a standard, providing the process to set up an aggregate royalty for a standard, and making individual FRAND-royalty determinations. The intention, it seems, is to replace market-based negotiations and institutions with centralized government oversight and price regulation.
How Many Standards and SEPs Are in the Regulation’s Scope?
From a legal standpoint, the first question raised by the regulation is: to what standards does it apply? The Commission, in its various studies, has often singled out the 3G, 4G, and 5G cellular standards. This is probably because they have been in the headlines, due to international litigation and multi-million-euro FRAND determinations.
The regulation, however, would apparently apply to all SDOs that request SEP owners to license on FRAND terms and to any SEPs in force in any EU member state. This is a very broad definition that could potentially capture thousands of different standards across all sectors of the economy. Moreover, it isn’t limited just to European SDOs. All international SDOs that include at least one patent in an EU member state would also be ensnared by this rule.
To give a sense of the magnitude of the task, the European Telecommunications Standards Institute (ETSI), a large European SDO, boasts that it annually publishes between 2,000 and 2,500 standards, while the Institute of Electrical and Electronics Engineers (IEEE), an SDO based in the United States, claims to have more than 2,000 standards. Earlier studies found that there were at least 251 interoperability standards in a laptop, while an average smartphone is estimated to contain a minimum of 30 interoperability standards. In the laptop, 75% of standards were licensed under FRAND terms.
In short, we may be talking about thousands of standards to be reported and checked by the EUIPO. Not only is this duplicative work (SDOs already have their own databases), but it would entail significant costs to SEP owners.
Aggregate Royalties May Not Add Anything New
The proposed regulation would allow contributors to a standard (not only SEP owners, but any entity that submits technical contributions, which need not be patented, to an SDO) to agree on the aggregate royalty for the standard. The idea behind aggregate royalty rates is to create transparency about the standard’s total price, so that implementers may account for royalties in the cost of their products. Furthermore, aggregate royalties may, theoretically, reduce costs and facilitate SEP licensing, as the total royalty burden would be known in advance.
Beyond competition-law concerns (there are no mentions in the leaked regulation of any safeguards against exchanges of commercially sensitive information), it is not clear what practical effects the aggregate royalty-rate announcements would bring. Is it just a wishful theoretical maximum? To be on the safe side, contributors may just announce their maximum preference, knowing that—in the actual negotiations—prices would be lowered by caps and discounts. This is nothing new. We have already had individual SEP owners who publicly announced their royalty programs in advance for 4G and 5G. And patent pools bring price transparency to video-codec standards.
What’s more, agreement among all contributors is not required. Given that contributors have different business models (some may be vertically integrated, while others focus on technology development and licensing), it is difficult to imagine all of them coming to a consensus. The regulation would appear to allow different contributors to jointly notify their views on the aggregate royalty. This may add even more confusion for standard implementers. For example, some contributors could announce an aggregate rate of $10 per product, another group 5% of the end-product price, and a third group a lower rate of $1 per product. In practice, announcements of aggregate royalty rates may be meaningless.
Patent Essentiality Is Not the Same as Patent Infringement, Validity, or Value
The regulation also proposes to assess the essentiality of patents declared essential for a standard. It is hoped that this would improve transparency in the SEP landscape and help implementers assess with whom they need to license. For an implementer, however, it is important not only to know whether patents are essential for a standard, but also whether it infringes SEPs with its products and whether SEPs are valid.
A patent may be essential to a standard but not infringed by a concrete product. For example, a patent owner may have a 4G SEP that reads on base stations, while an implementer that manufactures and sells only smartphones would not infringe that SEP. Or a patent owner may hold SEPs claiming optional features of a standard, while an implementer uses only the standard’s mandatory features in its products. A study of U.S. SEP litigation found that SEPs were held to be infringed in only 30.7% of cases; in other words, in 69.3% of cases, the SEP was not infringed by the accused products.
A patent may also be essential but invalid. Courts have the final say on whether granted patents fulfill patentability requirements. In the Unwired Planet v Huawei litigation in the UK, the court found two asserted patents valid, essential, and infringed, and two patents invalid.
Essentiality is, therefore, just one piece of the puzzle. Even if the parties were to accept a nonbinding essentiality determination (which is not guaranteed), they could still disagree over matters of infringement and validity. Essentiality checks are not a silver bullet that would eliminate all disputes.
Essentiality also should not be equated with the patent’s value. Not all patents are created equal. Some SEPs are related to breakthrough or core inventions, while others may be peripheral or optional. Economists have long found that the economic value of patents is highly skewed. Only a relatively small number of patents provide most of the value.
How Accurate and Reliable Is Sampling for Essentiality Assessments?
The leaked regulation provides that, every year, the EUIPO shall select a sample of claimed SEPs from each SEP owner, as well as from each specific standard, for essentiality checks. The Commission would adopt the precise methodology to ensure a fair and statistically valid selection that can produce sufficiently accurate results. Each SEP owner may also propose up to 100 claimed SEPs to be checked for essentiality for each specific standard.
The apparent goal of the sampling is to reduce the costs of essentiality assessments. Analyzing essentiality is not a simple task; it takes time and money to produce accurate and reliable results. A thorough essentiality review by a patent pool was estimated to cost up to €10,000 and to last two to three days. Another study reported spending 40-50 working hours preparing the claim charts used in essentiality assessments. If the EUIPO were directed to assess the essentiality of thousands of standards, it is easy to see how these costs could skyrocket and render the task impossible.
The use of samples is not without concerns: it inevitably introduces a margin of error. Keith Mallinson has suggested that a sample must be very large, encompassing thousands of patents, if any meaningful results are to be reached, which makes it questionable why SEP owners would be limited to proposing only 100 patents. Unless a widely accepted method to assess a large portfolio of declared patents were found, the results of these essentiality assessments would likely be imprecise and unreliable, and would therefore fall far short of the goal of increased transparency.
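To see why sample size matters, consider a rough back-of-the-envelope calculation. The figures below are hypothetical and not drawn from the regulation; the point is simply that the margin of error of a proportion estimated from a random sample shrinks only with the square root of the sample size.

```python
import math

# Hypothetical illustration: margin of error (95% confidence, normal
# approximation to the binomial) for an essentiality rate estimated
# from a random sample of n declared patents.

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a sample proportion."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

# If 30% of a 100-patent sample tests essential, the estimate carries
# roughly a nine-percentage-point margin of error; a 2,000-patent
# sample narrows it to about two points.
print(round(margin_of_error(0.30, 100), 3))    # -> 0.09
print(round(margin_of_error(0.30, 2_000), 3))  # -> 0.02
```

On these assumptions, a 100-patent check of a large portfolio leaves the true essentiality rate uncertain within a band of nearly twenty percentage points, which is consistent with the concern that small samples cannot deliver the transparency the regulation promises.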
The Dangers of a Top-Down Approach and Patent Counting for Royalty Determinations
Concealed in the regulation is the possibility that the EUIPO could use a top-down approach for royalty determinations, under which each SEP owner receives a proportional share of the standard’s total aggregate royalty. It requires:
Establishing a cumulative royalty for a standard; and then
Calculating the share in the total royalty to an individual SEP owner.
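The arithmetic these two steps imply can be sketched as follows (all figures are hypothetical, chosen only to illustrate the mechanics):

```python
# Illustrative sketch of the top-down approach (hypothetical figures only;
# the regulation does not prescribe these numbers).

def top_down_royalty(aggregate_rate: float,
                     owner_essential: int,
                     total_essential: int) -> float:
    """Step 2: an owner's royalty is its share of the counted essential
    patents multiplied by the standard's aggregate royalty (step 1)."""
    return aggregate_rate * owner_essential / total_essential

# Aggregate royalty of $10 per device, 25,000 patents counted as
# essential, one owner holding 2,500 of them:
print(top_down_royalty(10.0, 2_500, 25_000))  # -> 1.0 (i.e., $1 per device)
```

Because the formula depends only on patent counts, two portfolios of very different technical value would receive identical royalties, which is the core criticism of patent counting discussed below.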
Now we can see why the aggregate rate becomes important. The regulation would allow the EUIPO to set up a panel of three conciliators to provide a nonbinding expert opinion on the aggregate royalty rate (in addition to, or regardless of, the rates already announced by contributors). Essentiality checks are also needed to filter out which patents are truly essential, and their number can then be used to assess each SEP owner’s individual share.
A detailed analysis of this top-down approach exceeds the scope of this post, but here are the key points:
The approach relies on patent counting, treating every patent as having the same value. We have seen that this is not the case, and that value is, instead, highly skewed. Moreover, essential patents may be invalid or not infringed by specific devices, which is not factored into the top-down calculations.
The top-down approach is not used in commercial-licensing negotiations, and courts have frequently rejected its application; industry practice is to use comparable licensing agreements. The top-down approach was used in Unwired Planet v Huawei only as a cross-check for the rates derived from comparable agreements. The court in TCL v Ericsson relied on this method, but its decision was vacated on appeal. The most recent Interdigital v Lenovo judgment considered and rejected its use, finding “no value in Interdigital’s Top-Down cross-check in any of its guises.”
Fundamentally, the EUIPO’s top-down approach would be tantamount to direct government regulation of technology prices. So far, there are no studies suggesting that something is wrong with the level of royalties that might require government intervention. In fact, studies point to the opposite: prices are falling over time.
As discussed, the regulation provides an elaborate system for notifying standards and declared SEPs, conducting essentiality checks, and determining aggregate and individual royalty rates. Even with all these data points, however, it is not clear that they would help with licensing. Parties may not accept them and may still end up in court.
Recent experience from the automotive sector demonstrates that knowing the essentiality and the price of SEPs did not translate into smoother licensing. Avanci is a platform that gathers almost all SEP owners for licensing 2G, 3G, and 4G SEPs to car manufacturers. It was intended to provide a one-stop shop for licensees by offering a single price for a large portfolio of SEPs. All patents included in the Avanci platform were independently tested for essentiality. Avanci, however, faced reluctance from implementers to take a license. Only after litigating and prevailing did Avanci succeed in licensing the majority of the market.
Paradoxically, the most innovative companies—the ones that invest in the research and development of several different standardized solutions and rely on technology licensing as their business model—will bear the brunt of the regulation. Ironically, it pays to be a user of standardized technology rather than the innovator.
The introduction of such elaborate government regulation of SEP licensing also has important international ramifications. It is easy to imagine that other countries might not be thrilled with European regulators setting the aggregate rate for international standards and individual rates for their companies’ portfolios. China, in particular, might follow the example and set up its own centralized agencies for royalty determinations. What would happen if European, Chinese, or other regulators came up with different aggregate and individual royalty rates? The whole international standardization system could crumble.
In short, the regulation imposes significant costs on SEP owners that innovate and contribute their technologies to international standardization. Faced with excessive costs and overregulation, companies may abandon open and collaborative international standardization, based on FRAND licensing, and instead work on proprietary solutions in smaller industry groups. This would allow them to escape the ambit of EU regulation. Whether this is a better alternative is up for debate.
The European Commission on March 27 showered the public with a series of documents heralding a new, more interventionist approach to enforcing Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits “abuses of dominance.” This new approach threatens more aggressive, less economically sound enforcement against single-firm conduct in Europe.
EU courts may eventually constrain the Commission’s overreach in this area somewhat, but harmful business uncertainty will be the near-term reality. What’s more, the Commission’s new approach may unfortunately influence U.S. states that are considering European-style abuse-of-dominance amendments to their own substantive antitrust laws. As such, market-oriented U.S. antitrust commentators will need to be even more vigilant in keeping tabs on—and, where necessary, promptly critiquing—economically problematic shifts in European antitrust-enforcement policy.
The Commission’s Emerging Reassessment of Abuses of Dominance
In a press release summarizing its new initiative, the Commission made a “call for evidence” to obtain feedback on the adoption of first-time guidelines on exclusionary abuses of dominance under Article 102 TFEU.
In parallel, the Commission also published a “communication” announcing amendments to its 2008 guidance on enforcement priorities in challenging abusive exclusionary conduct. According to the press release, until final Article 102 guidelines are approved, this guidance “provides certain clarifications on its approach to determine whether to pursue cases of exclusionary conduct as a matter of priority.” An annex to the communication sets forth specific amendments to the 2008 guidance.
Finally, the Commission also released a competition policy brief (“a dynamic and workable effects-based approach to the abuse of dominance”) that discusses the policy justifications for the changes enumerated in the annex.
In short, the annex “toughens” the approach to abuse-of-dominance enforcement in five ways:
It takes a broader view of what constitutes “anticompetitive foreclosure.” The annex rejects the 2008 guidance’s emphasis on profitability (cases where a dominant firm can profitably maintain supracompetitive prices or profitably influence other parameters of competition) as key to prioritizing matters for enforcement. Instead, a new, far less-demanding prosecutorial standard is announced, one that views anticompetitive foreclosure as a situation “that allow[s] the dominant undertaking to negatively influence, to its own advantage and to the detriment of consumers, the various parameters of competition, such as price, production, innovation, variety or quality of goods or services.” Under this new approach, highly profitable competition on the merits (perhaps reflecting significant cost efficiencies) might be challenged, say, merely because enforcers were dissatisfied with a dominant firm’s particular pricing decisions, or the quality, variety, and “innovativeness” of its output. This would be a recipe for bureaucratic micromanagement of dominant firms’ business plans by competition-agency officials. The possibilities for arbitrary decision making by those officials, who may be sensitive to the interests of politically connected rent seekers (say, less-efficient competitors), are obvious.
The annex diminishes the importance of economic efficiency in dominant-firm analysis. The Commission’s 2008 guidance specified that Commission enforcers “would generally intervene where the conduct concerned has already been or is capable of hampering competition from competitors that are considered to be as efficient as the dominant undertaking.” The revised 2023 guidance “recognizes that in certain circumstances a less efficient competitor should be taken into account when considering whether particular price-based conduct leads to anticompetitive foreclosure.” This amendment plainly invites selective-enforcement actions to assist less-efficient competitors, placing protection of those firms above consumer-welfare maximization. In order to avoid liability, dominant firms may choose to raise their prices or reduce their investments in cost-reducing innovations, so as to protect a relatively inefficient competitive fringe. The end result would be diminished consumer welfare.
The annex encourages further micromanagement of dominant-firm pricing and other business decisions. Revised 2023 guidance invites the Commission to “examine economic data relating to prices” and to possible below-cost pricing, in considering whether a hypothetical as-efficient competitor would be foreclosed. Relatedly, the Commission encourages “taking into account other relevant quantitative and/or qualitative evidence” in determining whether an as-efficient competitor can compete “effectively” (emphasis added). This focus on often-subjective criteria such as “qualitative” indicia and the “effectiveness” of competition could subject dominant firms to costly new business-planning uncertainty. Similarly, the invitation to enforcers to “examine” prices may be viewed as a warning against “overaggressive” price discounting that would be expected to benefit consumers.
The annex imposes new constraints on a firm’s decision as to whether or not to deal (beneficial voluntary exchange, an essential business freedom that underlies our free-market system – see here, for example). A revision to the 2008 guidance specifies that, “[i]n situations of constructive refusal to supply (subjecting access to ‘unfair conditions’), it is not appropriate to pursue as a matter of priority only cases concerning the provision of an indispensable input or the access to an essential facility.” This encourages complaints to Brussels enforcers by scores of companies that are denied an opportunity to deal with a dominant firm, due to “unfairness.” This may be expected to substantially undermine business efficiency, as firms stuck with the “dominant” label are required to enter into suboptimal supply relationships. Dynamic efficiency will also suffer, to the extent that intellectual-property holders are required to license on unfavorable terms (a reality that may be expected to diminish dominant firms’ incentives to invest in innovative activities).
The annex threatens to increase the number of Commission “margin-squeeze” cases, whereby vertically integrated firms are required to offer favorable sales terms to, and thereby prop up, wholesalers who want to “compete” with them at retail. (See here for a more detailed discussion of the margin-squeeze concept.) The current standard for margin-squeeze liability already is far narrower in the United States than in Europe, due to the U.S. Supreme Court’s decision in linkLine (2009).
Specifically, the annex announces margin-squeeze-related amendments to the 2008 guidance. The amendments aim to clarify that “it is not appropriate to pursue as a matter of priority margin squeeze cases only where those cases involve a product or service that is objectively necessary to be able to compete effectively on the downstream market.” This extends margin-squeeze downstream competitor-support obligations far beyond regulated industries; how far, only time will tell. (See here for an economic study indicating that even the Commission’s current less-intrusive margin-squeeze policy undermines consumer welfare.) The propping up of less-efficient competitors may, of course, be facilitated by having the dominant firm take the lead in raising retail prices, to ensure that the propped-up companies get “fair margins.” Such a result diminishes competitive vigor and (once again) directly harms consumers.
In sum, through the annex’s revisions to the 2008 guidance, the Commission has, without public comment (and well prior to the release of new first-time guidelines), taken several significant steps that predictably will reduce competitive vitality and harm consumers in those markets where “dominant firms” exist. Relatedly, of course, to the extent that innovative firms respond to incentives to “pull their punches” so as not to become dominant, dynamic competition will be curtailed. As such, consumers will suffer, and economic welfare will diminish.
How Will European Courts Respond?
Fortunately, there is a ray of hope for those concerned about the European Commission’s new interventionist philosophy regarding abuses of dominance. Although the annex and the related competition policy brief cite a host of EU judicial decisions in support of revisions to the guidance, their selective case references and interpretations of judicial holdings may be subject to question. I leave it to EU law experts (I am not one) to more thoroughly parse specific judicial opinions cited in the March 27 release. Nevertheless, it seems to me that the Commission may face some obstacles to dramatically “stepping up” its abuse-of-dominance enforcement actions along the lines suggested by the annex.
A number of relatively recent judicial decisions underscore the concerns that EU courts have demonstrated regarding the need for evidentiary backing and economic analysis to support the Commission’s findings of anticompetitive foreclosure. Let’s look at a few.
In Intel v. Commission (2017), the European Court of Justice (ECJ) held that the Commission had failed to adequately assess whether Intel’s conditional rebates on certain microprocessors were capable of restricting competition on the basis of the “as-efficient competitor” (AEC) test, and referred the case back to the General Court. The ECJ also held that the balancing of the favorable and unfavorable effects of Intel’s rebate practice could only be carried out after an analysis of that practice’s capacity to exclude competitors that are at least as efficient as the dominant firm.
In 2022, on remand, the General Court annulled the Commission’s determination (thereby erasing its 1.06 billion Euro fine) that Intel had abused its dominant position. The Court held that the Commission’s failure to respond to Intel’s argument that the AEC test was flawed, coupled with the Commission’s errors in its analysis of contested Intel practices, meant that the “analysis carried out by the Commission is incomplete and, in any event, does not make it possible to establish to the requisite legal standard that the rebates at issue were capable of having, or were likely to have, anticompetitive effects.”
In Unilever Italia (2023), the ECJ responded to an Italian Council of State request for guidance in light of the Italian Competition Authority’s finding that Unilever had abused its dominant position through exclusivity clauses that covered the distribution of packaged ice cream in Italy. The court found that a competition authority is obliged to assess the actual capacity to exclude by taking into account evidence submitted by the dominant undertaking (in this case, the Italian Authority had failed to do so). The ECJ stated that its 2017 clarification of rebate-scheme analysis in Intel also was applicable to exclusivity clauses.
Finally, in Qualcomm v. Commission (2022), the General Court set aside a 2018 Commission decision imposing a 1 billion Euro fine on Qualcomm for abuse of a dominant position in LTE chipsets. The Commission contended that Qualcomm’s 2011-2016 incentive payments to Apple for exclusivity reduced Apple’s incentive to shift suppliers and had the capability to foreclose Qualcomm’s competitors from the LTE-chipset market. The court found massive procedural irregularities by the Commission and held that the Commission had not shown that Qualcomm’s payments either had foreclosed or were capable of foreclosing competitors. The Court concluded that the Commission had seriously erred in the evidence it relied upon, and in its failure to take into account all relevant factors, as required under the 2022 Intel decision.
These decisions are not, of course, directly related to the specific changes announced in the annex. They do, however, raise serious questions about how EU judges will view new aggressive exclusionary-conduct theories based on amendments to the 2008 guidance. In particular, EU courts have signaled that they will:
closely scrutinize Commission fact-finding and economic analysis in evaluating exclusionary-abuse cases;
require enforcers to carefully weigh factual and economic submissions put forth by dominant firms under investigation;
require that enforcers take economic-efficiency arguments seriously; and
continue to view the “as-efficient competitor” concept as important, even though the Commission may seek to minimize the test’s significance.
In other words, in the EU, as in the United States, reviewing courts may “put a crimp” in efforts by national competition agencies to read case law very broadly, so as to “rein in” allegedly abusive dominant-firm conduct. In jurisdictions with strong rule-of-law traditions, enforcers propose but judges dispose. The kicker, however, is that judicial review takes time. In the near term, firms will have to absorb additional business-uncertainty costs.
What About the States?
“Monopolization”—rather than the European “abuse of a dominant position”—is, of course, the key single-firm conduct standard under U.S. federal antitrust law. But the debate over the Commission’s abuse-of-dominance standards nonetheless is significant to domestic American antitrust enforcement.
Under U.S. antitrust federalism, the individual states are empowered to enact antitrust legislation that goes beyond the strictures of federal antitrust law. Currently, several major states—New York, Pennsylvania, and Minnesota—are considering antitrust bills that would add abuse of a dominant position as a new state antitrust cause of action (see here, here, here, and here). What’s more, the most populous U.S. state, California, may also consider similar legislation (see here). Such new laws would harmfully undermine consumer welfare (see my commentary here).
If certain states enacted a new abuse-of-dominance standard, it would be natural for their enforcers to look to EU enforcers (with their decades of relevant experience) for guidance in the area. As such, the annex (and future Commission guidelines, which one would expect to be consistent with the new annex guidance) could prove quite influential in promoting highly interventionist state policies that reach far beyond federal monopolization standards.
What’s worse, federal judicial case law that limits the scope of Sherman Act monopolization cases would have little or no influence in constraining state judges’ application of any new abuse-of-dominance standards. It is questionable whether state judges would feel themselves empowered to apply, or even capable of independently applying, often-confusing EU case law regarding abuse of dominance as a possible constraint on state officials’ prosecutions.
The Commission’s emerging guidance on abuse of dominance is bad for consumers and for competition. EU courts may constrain some Commission enforcement excesses, but that will take time, and new short-term business uncertainty costs are likely.
Moreover, negative effects may eventually also be felt in the United States if states enact proposed abuse-of-dominance prohibitions and state enforcers adopt the European Commission’s interventionist philosophy. State courts, applying an entirely new standard not found in federal law, should not be expected to play a significant role in curtailing aggressive state prosecutions for abuse of dominance.
Promoters of principled, effects-based, economics-centric antitrust enforcement should take heed. They must be prepared to highlight the ramifications of both foreign and state-level initiatives as they continue to advocate for market-based antitrust policies. Sound law & economics training for state enforcers and judges likely will become more important than ever.
The FTC claims that this vertical merger would stifle competition and innovation in the U.S. market for life-saving cancer tests. The FTC’s decision ignores Illumina’s ability to use its resources to obtain regulatory clearances and bring GRAIL’s test to market more quickly, thereby saving many future lives. Other benefits of the transaction, including the elimination of double marginalization, have been succinctly summarized by Thom Lambert. See also the outstanding critique of the FTC’s case by Bruce Kobayashi, Jessica Melugin, Kent Lassman, and Timothy Muris, and this update by Dan Gilman.
The transaction’s potential boon to consumers and patients has, alas, been sacrificed at the altar of theoretical future harms in a not-yet-existing multi-cancer early-detection (MCED) market, while the FTC ignored Illumina’s proffered safeguards (embodied in contractual assurances) that it would make its platform available to third parties in a neutral fashion.
The FTC’s holding comes in tandem with a previous European Commission holding to prohibit Illumina’s acquisition of GRAIL and impose a large fine. These two decisions epitomize antitrust enforcement policy at its worst: the sacrifice of clear and substantial near-term welfare benefits to consumers (including lives saved!) based on highly questionable future harms that cannot be reasonably calibrated at this time. A federal appeals court should quickly and decisively overturn this problematic FTC holding, and a European tribunal should act in similar fashion.
The courts cannot, of course, undo the harm flowing from delays in moving GRAIL’s technology forward. This is a sad day for believers in economically sound, evidence-based antitrust enforcement, as well as for patients and consumers.
Spring is here, and hope springs eternal in the human breast that competition enforcers will focus on welfare-enhancing initiatives, rather than on welfare-reducing interventionism that fails the consumer welfare standard.
Fortuitously, on March 27, the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) are hosting an international antitrust-enforcement summit, featuring senior state and foreign antitrust officials (see here). According to an FTC press release, “FTC Chair Lina M. Khan and DOJ Assistant Attorney General Jonathan Kanter, as well as senior staff from both agencies, will facilitate discussions on complex challenges in merger and unilateral conduct enforcement in digital and transitional markets.”
I suggest that the FTC and DOJ shelve that topic, which is the focus of endless white papers and regular enforcement-oriented conversations among competition-agency staffers from around the world. What is there for officials to learn? (Perhaps they could discuss the value of curbing “novel” digital-market interventions that undermine economic efficiency and innovation, but I doubt that this important topic would appear on the agenda.)
Rather than tread familiar enforcement ground (albeit armed with novel legal theories that are known to their peers), the FTC and DOJ instead should lead an international dialogue on applying agency resources to strengthen competition advocacy and to combat anticompetitive market distortions. Such initiatives, which involve challenging government-generated impediments to competition, would efficiently and effectively promote the Biden administration’s “whole of government” approach to competition policy.
[C]ompetition may be lessened significantly by various public policies and institutional arrangements as well [as by private restraints]. Indeed, private restrictive business practices are often facilitated by various government interventions in the marketplace. Thus, the mandate of the competition office extends beyond merely enforcing the competition law. It must also participate more broadly in the formulation of its country’s economic policies, which may adversely affect competitive market structure, business conduct, and economic performance. It must assume the role of competition advocate, acting proactively to bring about government policies that lower barriers to entry, promote deregulation and trade liberalization, and otherwise minimize unnecessary government intervention in the marketplace.
Competition advocacy, broadly, is the use of FTC expertise in competition, economics, and consumer protection to persuade governmental actors at all levels of the political system and in all branches of government to design policies that further competition and consumer choice. Competition advocacy often takes the form of letters from the FTC staff or the full Commission to an interested regulator, but also consists of formal comments and amicus curiae briefs.
Cooper, Pautler, & Zywicki also provided guidance—derived from an evaluation of FTC public-interest interventions—on how advocacy initiatives can be designed to maximize their effectiveness.
During the Trump administration, the FTC’s Economic Liberty Task Force shone its advocacy spotlight on excessive state occupational-licensing restrictions that create unwarranted entry barriers and distort competition in many lines of work. (The Obama administration in 2016 issued a report on harms to workers that stem from excessive occupational licensing, but it did not accord substantial resources to advocacy efforts in this area.)
ACMDs refer to government-imposed restrictions on competition. These distortions may take the form of distortions of international competition (trade distortions), distortions of domestic competition, or distortions of property-rights protection (that with which firms compete). Distortions across any of these pillars could have a negative effect on economic growth. (See here.)
Because they enjoy state-backed power and the force of law, ACMDs cannot readily be dislodged by market forces over time, unlike purely private restrictions. What’s worse, given the role that governments play in facilitating them, ACMDs often fall outside the jurisdictional reach of both international trade laws and domestic competition laws.
reduce the incentive of suppliers to compete; and that
limit the choices and information available to consumers.
When those categories explicitly or implicitly favor domestic enterprises over foreign enterprises, they may substantially distort international trade and investment decisions, to the detriment of economic efficiency and consumer welfare in multiple jurisdictions.
Given the non-negligible extraterritorial impact of many ACMDs, directing the attention of foreign competition agencies to the ACMD problem would be a particularly efficient use of time at gatherings of peer competition agencies from around the world. Peer competition agencies could discuss strategies to convince their governments to phase out or limit the scope of ACMDs.
The collective-action problem that may prevent any one jurisdiction from acting unilaterally to begin dismantling its ACMDs might be addressed through international trade negotiations (perhaps, initially, plurilateral negotiations) aimed at creating ACMD remedies in trade treaties. (Shanker Singham has written about crafting trade remedies to deal with ACMDs – see here, for example.) Thus, strategies whereby national competition agencies could “pull in” their fellow national trade agencies to combat ACMDs merit exploration. Why not start the ball rolling at next week’s international antitrust-enforcement summit? (Hint: why not pull in a bunch of DOJ and FTC economists, who may feel underappreciated and underutilized at this time, to help out?)
If the Biden administration truly wants to strengthen the U.S. economy by bolstering competitive forces, the best way to do that would be to reallocate a substantial share of antitrust-enforcement resources to competition-advocacy efforts and the dismantling of ACMDs.
In order to have maximum impact, such efforts should be backed by a revised “whole of government” initiative – perhaps embodied in a new executive order. That new order should urge federal agencies (including the “independent” agencies that exercise executive functions) to cooperate with the DOJ and FTC in rooting out and repealing anticompetitive regulations (including ACMDs that undermine competition by distorting trade flows).
The DOJ and FTC should also be encouraged by the executive order to step up their advocacy efforts at the state level. The Office of Management and Budget (OMB) could be pulled in to help identify ACMDs, and the U.S. Trade Representative’s Office (USTR), with DOJ and FTC economic assistance, could start devising an anti-ACMD negotiating strategy.
In addition, the FTC and DOJ should directly urge foreign competition agencies to engage in relatively more competition advocacy. The U.S. agencies should simultaneously push to make competition-advocacy promotion a much higher International Competition Network priority (see here for the ICN Advocacy Working Group’s 2022-2025 Work Plan). The FTC and DOJ could simultaneously encourage their competition-agency peers to work with their fellow trade agencies (USTR’s peer bureaucracies) to devise anti-ACMD negotiating strategies.
These suggestions may not quite be ripe for meetings to be held in a few days. But if the administration truly believes in an all-of-government approach to competition, and is truly committed to multilateralism, these recommendations should be right up its alley. There will be plenty of bilateral and plurilateral trade and competition-agency meetings (not to mention the World Bank, OECD, and other multilateral gatherings) in the next year or so at which these sensible, welfare-enhancing suggestions could be advanced. After all, “hope springs eternal in the human breast.”
[This is a guest post from Mario Zúñiga of EY Law in Lima, Perú. An earlier version was published in Spanish on the author’s personal blog. He gives thanks to Hugo Figari and Walter Alvarez for their comments on the initial version and special thanks to Lazar Radic for his advice and editing of the English version.]
There is a line of thinking according to which, without merger-control rules, antitrust law is “incomplete.” Without such a regime, the argument goes, whenever a group of companies faces the risk of being penalized for cartelizing, they could instead merge and thus “raise prices without any legal consequences.”
A few months ago, at a symposium that INDECOPI organized for the first anniversary of the Peruvian Merger Control Act’s enactment, Rubén Maximiano of the OECD’s Competition Division argued in support of the importance of merger-control regimes with the assessment that mergers are “like the ultimate cartel,” because a merged firm could raise prices “with impunity.”
I get Maximiano’s point. Antitrust law was born, in part, to counter the rise of trusts, which had been used to evade the restriction that common law already imposed on “restraints of trade” in the United States. Let’s not forget, however, that these “trusts” were essentially a facade used to mask agreements to fix prices, and only to fix prices. They were not real combinations of two or more businesses, as occurs in a merger. Therefore, even if one agrees that it is important to scrutinize mergers, describing them as an alternative means of “cartelizing” is, to say the least, incomplete.
While this might seem to some to be a debate about mere semantics, I think it is relevant to the broader context in which competition agencies are being pushed from various fronts toward a more aggressive application of merger-control rules.
In describing mergers only as a strategy to gain more market power, or market share, or to expand profit margins, we would miss something very important: how these benefits would be obtained. Let’s not forget what the goal of antitrust law actually is. However we articulate this goal (“consumer welfare” or “the competitive process”), it is clear that antitrust law is more concerned with protecting a process than achieving any particular final result. It protects a dynamic in which, in principle, the market is trusted to be the best way to allocate resources.
In that vein, competition policy seeks to remove barriers to this dynamic, not to force a specific result. In this sense, it is not just what companies achieve in the market that matters, but how they achieve it. And there’s an enormous difference between price-fixing and buying a company. That’s why antitrust law gives a different treatment to “naked” agreements to collude while also contemplating an “ancillary agreements” doctrine.
By accepting this (“ultimate cartel”) approach to mergers, we would also be ignoring decades of economics and management literature. We would be ignoring, to start, the fundamental contributions of Ronald Coase in “The Nature of the Firm.” Acquiring other companies (or business lines or assets) allows us to reduce transaction costs and generate economies of scale in production. According to Coase:
The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of ‘organising’ production through the price mechanism is that of discovering what the relevant prices are. This cost may be reduced but it will not be eliminated by the emergence of specialists who will sell this information. The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.
The simple answer to that could be to enter into long-term contracts, but Coase notes that that’s not that easy. He explains that:
There are, however, other disadvantages – or costs – of using the price mechanism. It may be desired to make a long-term contract for the supply of some article or service. This may be due to the fact that if one contract is made for a longer period, instead of several shorter ones, then certain costs of making each contract will be avoided. Or, owing to the risk attitude of the people concerned, they may prefer to make a long rather than a short-term contract. Now, owing to the difficulty of forecasting, the longer the period of the contract is for the supply of the commodity or service, the less possible, and indeed, the less desirable it is for the person purchasing to specify what the other contracting party is expected to do.
Coase, to be sure, makes this argument mainly with respect to vertical mergers, but I think it may be applicable to horizontal mergers, as well, to the extent that the latter generate “economies of scale.” Moreover, it’s not unusual for many acquisitions that are classified as “horizontal” to also have a “vertical” component (e.g., a consumer-goods company may buy another company in the same line of business because it wants to take advantage of the latter’s distribution network; or a computer manufacturer may buy another computer company because it has an integrated unit that produces microprocessors).
We also should not leave aside the entrepreneurship element, which frequently is ignored in the antitrust literature and in antitrust law and policy. As Israel Kirzner pointed out more than 50 years ago:
An economics that emphasizes equilibrium tends, therefore, to overlook the role of the entrepreneur. His role becomes somehow identified with movements from one equilibrium position to another, with ‘innovations,’ and with dynamic changes, but not with the dynamics of the equilibrating process itself.
Instead of the entrepreneur, the dominant theory of price has dealt with the firm, placing the emphasis heavily on its profit-maximizing aspects. In fact, this emphasis has misled many students of price theory to understand the notion of the entrepreneur as nothing more than the focus of profit-maximizing decision-making within the firm. They have completely overlooked the role of the entrepreneur in exploiting superior awareness of price discrepancies within the economic system.
Working in mergers and acquisitions, either as an external advisor or in-house counsel, has confirmed the aforementioned for me (anecdotal evidence, to be sure, but with the advantage of allowing very in-depth observations). Firms that take control of other firms are seeking to exploit the comparative advantages they may have over whoever is giving up control. Sometimes a company has (or thinks it has) knowledge or assets (greater knowledge of the market, better sales strategies, a broader distribution network, better access to credit, among many other potential advantages) that allow it to make better use of the seller’s existing assets.
An entrepreneur is successful because he or she sees what others do not see. Beatriz Boza summarizes it well in a section of her book “Empresarios” in which she details the purchase of the Santa Isabel supermarket chain by Intercorp (one of Peru’s biggest conglomerates). The group’s main shareholder, Carlos Rodríguez-Pastor, had already decided to enter the retail business, and the opportunity came in 2003 when the Dutch group Ahold put Santa Isabel up for sale. The move was risky for Intercorp, in that Santa Isabel was in debt and operating at a loss. But Rodríguez-Pastor had been studying what was happening in similar markets in other countries and knew that having a stake in the supermarket business would allow him to reach more consumer-credit customers, in addition to offering other vertical-integration opportunities. In retrospect, the deal can only be described as a success. In 2014, the company reached 34.1% market share and took in revenues of more than US$1.25 billion, with an EBITDA margin of 6.2%. Rodríguez-Pastor saw the synergies that others did not see, but he also dared to take the risk. As Boza writes:
‘Nobody ever saw the synergies,’ concludes the businessman, reminding the businessmen and executives who warned him that he was going to go bankrupt after the acquisition of Ahold’s assets. ‘Today we have a retail circuit that no one else can have.’
Competition authorities need to recognize these sorts of synergies and efficiencies, and take them into account as compensating effects even where the combination might otherwise represent some risk to competition. That is why the vast majority of proposed mergers are approved by competition authorities around the world.
There is some evidence that companies sanctioned in cartel cases later choose to merge, but what this calls for is that competition authorities put more effort into prosecuting those particular mergers, not that they adopt a much more aggressive approach to reviewing all mergers.
I am not proposing, of course, that we should abolish merger control or even that it should necessarily be “permissive.” Some mergers may indeed represent a genuine risk to competition. But in analyzing them, employing technical analytic techniques and robust evidence, it is important to recognize that entrepreneurs may have countless valid business reasons to carry out a merger—reasons that are often not fully formalized or even understood by the entrepreneurs themselves, since they operate under a high degree of uncertainty and risk. An entrepreneur’s primary motivation is to maximize his or her own benefit, but we cannot just assume that this will be greater after “concentrating” markets.
Competition agencies must recognize this, and not simply presume anticompetitive intentions or impacts. Antitrust law—and, in particular, the concentration-control regimes throughout the world—require that any harm to competition must be proved, and this is so precisely because mergers are not like cartels.
 The debate prior to the enactment of Peru’s Merger Control Act became too politicized and polarized. Opponents went so far as to affirm that merger control was “unconstitutional” (highly debatable) or that it constituted an interventionist policy (something that I believe cannot be assumed but is contingent on the type of regulation that is approved or how it is applied). On the other hand, advocates of the regulation claimed an inevitable scenario of concentrated markets and monopolies if the act was not approved (without any empirical evidence of this claim). My personal position was initially skeptical, considering that the priority—from a competition policy point of view, at least in a developing economy like Peru—should continue to be deregulation to remove entry barriers and to prosecute cartels. That being said, a well-designed and well-enforced merger-control regime (i.e., one that generally does not block mergers that are not harmful to competition; is agile; and has adequate protection from political interference) does not have to be detrimental to markets and can generate benefits in terms of avoiding anti-competitive mergers.
In Peru, the Commission for the Defense of Free Competition and its Technical Secretariat have been applying the law pretty reasonably. To date, of more than 20 applications, the vast majority have been approved without conditions, and one has been approved with conditions. In addition, approval requests have been resolved in an average of 23 days, well within the legal deadline.
 See, e.g., this peer-reviewed 2018 OECD report: “The adoption of a merger control regime should be a priority for Peru, since in its absence competitors can circumvent the prohibition against anticompetitive agreements by merging – with effects potentially similar to those of a cartel immune from antitrust scrutiny.”
 National Institute for the Defense of Competition and the Protection of Intellectual Property (INDECOPI, after its Spanish acronym), is the Peruvian competition agency. It is an administrative agency with a broad scope of tasks, including antitrust law, unfair competition law, consumer protection, and intellectual property registration, among others. It can adjudicate cases and impose fines. Its decisions can be challenged before courts.
 You can watch the whole symposium (which I recommend) here.
 See Gregory J. Werden’s “The Foundations of Antitrust.” Werden explains how the term “trust” had lost its original legal meaning and designated all kinds of agreements intended to restrict competition.
Below, I summarize some of the forum’s noteworthy takeaways, followed by concluding comments on the current state of the antitrust enterprise, as reflected in forum panelists’ remarks.
The consumer welfare standard is neither a recent nor an arbitrary antitrust-enforcement construct, and it should not be abandoned in order to promote a more “enlightened” interventionist antitrust.
George Mason University’s Donald Boudreaux emphasized in his introductory remarks that the standard goes back to Adam Smith, who noted in “The Wealth of Nations” nearly 250 years ago that the appropriate end of production is the consumer’s benefit. Moreover, American Antitrust Institute President Diana Moss, a leading proponent of more aggressive antitrust enforcement, argued in standalone remarks against abandoning the consumer welfare standard, as it is sufficiently flexible to justify a more interventionist agenda.
The purported economic justifications for a far more aggressive antitrust-enforcement policy on mergers remain unconvincing.
Moss’ presentation expressed skepticism about vertical-merger efficiencies and called for more aggressive challenges to such consolidations. But Boudreaux skewered those arguments in a recent four-point rebuttal at Café Hayek. As he explains, Moss’ call for more vertical-merger enforcement ignores the fact that “no one has stronger incentives than do the owners and managers of firms to detect and achieve possible improvements in operating efficiencies – and to avoid inefficiencies.”
Moss’ complaint about chronic underenforcement mistakes by overly cautious agencies also ignores the fact that there will always be mistakes, and there is no reason to believe “that antitrust bureaucrats and courts are in a position to better predict the future [regarding which efficiencies claims will be realized] than are firm owners and managers.” Moreover, Moss provided “no substantive demonstration or evidence that vertical mergers often lead to monopolization of markets – that is, to industry structures and practices that harm consumers. And so even if vertical mergers never generate efficiencies, there is no good argument to use antitrust to police such mergers.”
And finally, Boudreaux considers Moss’ complaint that a court refused to condemn the AT&T-Time Warner merger, arguing that this does not demonstrate that antitrust enforcement is deficient:
[A]s soon as the . . . merger proved to be inefficient, the parties themselves undid it. This merger was undone by competitive market forces and not by antitrust! (Emphasis in the original.)
The agencies, however, remain adamant in arguing that merger law has been badly underenforced. As such, the new leadership plans to charge ahead, challenging more mergers based on mere market structure and paying little heed to efficiency arguments or actual showings of likely future competitive harm.
In her afternoon remarks at the forum, Principal Deputy Assistant U.S. Attorney General for Antitrust Doha Mekki highlighted five major planks of Biden administration merger enforcement going forward.
Clayton Act Section 7 is an incipiency statute. Thus, “[w]hen a [mere] change in market structure suggests that a firm will have an incentive to reduce competition, that should be enough [to justify a challenge].”
“Once we see that a merger may lead to, or increase, a firm’s market power, only in very rare circumstances should we think that a firm will not exercise that power.”
A structural presumption “also helps businesses conform their conduct to the law with more confidence about how the agencies will view a proposed merger or conduct.”
Efficiencies defenses will be given short shrift, and perhaps ignored altogether. This is because “[t]he Clayton Act does not ask whether a merger creates a more or less efficient firm—it asks about the effect of the merger on competition. The Supreme Court has never recognized efficiencies as a defense to an otherwise illegal merger.”
Merger settlements have often failed to preserve competition, and they will be highly disfavored. Therefore, expect a lot more court challenges to mergers than in recent decades. In short, “[w]e must be willing to litigate. . . . [W]e need to acknowledge the possibility that sometimes a court might not agree with us—and yet go to court anyway.”
Mekki’s comments suggest to me that the soon-to-be-released new draft merger guidelines may emphasize structural market-share tests, generally reject efficiencies justifications, and eschew the economic subtleties found in the current guidelines.
The agencies—and the FTC, in particular—have serious institutional problems that undermine their effectiveness, and risk a loss of credibility before the courts in the near future.
In his address to the forum, former FTC Chairman Bill Kovacic lamented the inefficient limitations on reasoned FTC deliberations imposed by the Sunshine Act, which chills informal communications among commissioners. He also pointed to the United States’ globally unusual arrangement of two enforcers with duplicative antitrust authority, and lamented the resulting lack of policy coherence, which reflects imperfect coordination between the agencies.
Perhaps most importantly, Kovacic raised the specter of the FTC losing credibility in a possible world where Humphrey’s Executor is overturned (see here) and the commission is granted little judicial deference. He suggested taking lessons on policy planning and formulation from foreign enforcers—the United Kingdom’s Competition and Markets Authority, in particular. He also decried agency officials’ decisions to belittle prior administrations’ enforcement efforts, seeing it as detracting from the international credibility of U.S. enforcement.
The FTC is embarking on a novel interventionist path at odds with decades of enforcement policy.
In luncheon remarks, Commissioner Christine S. Wilson lamented the lack of collegiality and consultation within the FTC. She warned that far-reaching rulemakings and other new interventionist initiatives may yield a backlash that undermines the institution.
Following her presentation, a panel of FTC experts discussed several aspects of the commission’s “new interventionism.” According to one panelist, the FTC’s new Section 5 Policy Statement on Unfair Methods of Competition (which ties “unfairness” to arbitrary and subjective terms) “will not survive in” (presumably, will be given no judicial deference by) the courts. Another panelist bemoaned rule-of-law problems arising from FTC actions, called for consistency in FTC and DOJ enforcement policies, and warned that the new merger guidelines will represent a “paradigm shift” that generates more business uncertainty.
The panel expressed doubts about the legal prospects for a proposed FTC rule on noncompete agreements, and noted that constitutional challenges to the agency’s authority may engender additional difficulties for the commission.
The DOJ is greatly expanding its willingness to litigate, and is taking actions that may undermine its credibility in court.
Assistant U.S. Attorney General for Antitrust Jonathan Kanter has signaled a disinclination to settle, as well as an eagerness to litigate large numbers of cases (toward that end, he has hired a huge number of litigators). One panelist noted that, given this posture from the DOJ, there is a risk that judges may come to believe that the department’s litigation decisions are not well-grounded in the law and the facts. The business community may also have a reduced willingness to “buy in” to DOJ guidance.
Panelists also expressed doubts about the wisdom of the DOJ bringing more criminal Sherman Act Section 2 cases. Although the Sherman Act is nominally a criminal statute, such prosecutions would face the demanding “beyond a reasonable doubt” standard of criminal law and could raise Due Process concerns. Panelists also warned that, if the new merger guidelines are “unsound,” they may detract from the DOJ’s credibility in federal court.
International antitrust developments have introduced costly new ex ante competition-regulation and enforcement-coordination problems.
As one panelist explained, the European Union’s implementation of the new Digital Markets Act (DMA) will harmfully undermine market forces. The DMA is a form of ex ante regulation—primarily applicable to large U.S. digital platforms—that will harmfully interject bureaucrats into network planning and design. The DMA will lead to inefficiencies, market fragmentation, and harm to consumers, and will inevitably have spillover effects outside Europe.
Even worse, the DMA will not displace the application of EU antitrust law, but merely add to its burdens. Regrettably, the DMA’s ex ante approach is being imitated by many other enforcement regimes, and the U.S. government tacitly supports it. The DMA has not been included in the U.S.-EU joint competition dialogue, which as a result risks failure. Canada and the U.K. should also be added to the dialogue.
Other International Concerns
The international panelists also noted that there is an unfortunate lack of convergence on antitrust procedures. Furthermore, different jurisdictions manifest substantial inconsistencies in their approaches to multinational merger analysis, where better coordination is needed. There is a special problem in the areas of merger review and of criminal leniency for price fixers: when multiple jurisdictions need to “sign off” on an enforcement matter, the “most restrictive” jurisdiction has an effective veto.
Finally, former Assistant U.S. Attorney General for Antitrust James Rill—perhaps the most influential promoter of the adoption of sound antitrust laws worldwide—closed the international panel with a call for enhanced transnational cooperation. He highlighted the importance of global convergence on sound antitrust procedures, emphasizing due process. He also advocated bolstering International Competition Network (ICN) and OECD Competition Committee convergence initiatives, and explained that greater transparency in agency-enforcement actions is warranted. In that regard, Rill said, ICN nongovernmental advisers should be given a greater role.
Taken as a whole, the forum’s various presentations painted a rather gloomy picture of the short-term prospects for sound, empirically based, economics-centric antitrust enforcement.
In the United States, the enforcement agencies are committed to far more aggressive antitrust enforcement, particularly with respect to mergers. The agencies’ new approach downplays efficiencies and is quick to presume that broad categories of business conduct are anticompetitive, relying far less on case-specific economic analysis.
The outlook is also bad overseas, as European Union enforcers are poised to implement new ex ante regulation of competition by large platforms as an addition to—not a substitute for—established burdensome antitrust enforcement. Most foreign jurisdictions appear to be following the European lead, and the U.S. agencies are doing nothing to discourage them. Indeed, they appear to fully support the European approach.
The consumer welfare standard, which until recently was the stated touchstone of American antitrust enforcement—and was given at least lip service in Europe—has more or less been set aside. The one saving grace in the United States is that the federal courts may put a halt to the agencies’ overweening ambitions, but that will take years. In the meantime, consumer welfare will suffer and welfare-enhancing business conduct will be disincentivized. The EU courts also may place a minor brake on European antitrust expansionism, but that is less certain.
Recall, however, that when evils flew out of Pandora’s box, hope remained. Let us hope, then, that the proverbial worm will turn, and that new leadership—inspired by hopeful and enlightened policy advocates—will restore principled antitrust grounded in the promotion of consumer welfare.