
The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. On this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that BlackBerry would forever dominate the mobile-communications device market, eBay would always dominate the e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible in light of their persistent losses in most cases and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms did not come as a surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. In theory, either contingency is plausible, depending on the assumptions the economic modeler selects.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.
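The tipping logic can be made concrete with a minimal simulation. The sketch below is purely illustrative, with hypothetical parameters of my own choosing rather than anything drawn from the sources cited here: users stay only while the platform’s network value beats their outside option, so a modest initial defection can cascade into collapse.

```python
# Illustrative sketch (hypothetical parameters): each user's value from the
# platform is alpha * n, where n is the number of users. Each departure lowers
# the value for everyone who remains -- network effects running in reverse.

def surviving_users(n_start, alpha, outside_option, shock, rounds=20):
    """Simulate rounds of exit. The fraction leaving each round grows with the
    gap between the outside option and current platform value (a reduced-form
    stand-in for users with heterogeneous attachment to the platform)."""
    n = n_start * (1.0 - shock)  # an initial defection to a rival
    for _ in range(rounds):
        value = alpha * n  # network value rises (and falls) with the user count
        leavers = max(0.0, (outside_option - value) / outside_option)
        n *= 1.0 - min(1.0, leavers)
    return n

# With no shock the platform is stable; a 10% defection tips it into collapse.
print(surviving_users(100.0, alpha=0.0105, outside_option=1.0, shock=0.0))   # 100.0
print(surviving_users(100.0, alpha=0.0105, outside_option=1.0, shock=0.10))  # ~0
```

The point of the toy model is simply that the same feedback loop credited with entrenching a leader can, past a threshold, unwind its position just as quickly.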

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Justice Department against IBM, which was filed in 1969 and ended with withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

Federal Trade Commission (FTC) Chair Lina Khan missed the mark once again in her May 6 speech on merger policy, delivered at the annual meeting of the International Competition Network (ICN). At a time when the FTC and U.S. Justice Department (DOJ) are presumably evaluating responses to the agencies’ “request for information” on possible merger-guideline revisions (see here, for example), Khan’s recent remarks suggest a predetermination that merger policy must be “toughened” significantly to disincentivize a larger portion of mergers than under present guidance. A brief discussion of Khan’s substantively flawed remarks follows.

Discussion

Khan’s remarks begin with a favorable reference to the tendentious statement from President Joe Biden’s executive order on competition that “broad government inaction has allowed far too many markets to become uncompetitive, with consolidation and concentration now widespread across our economy, resulting in higher prices, lower wages, declining entrepreneurship, growing inequality, and a less vibrant democracy.” The claim that “government inaction” has enabled increased market concentration and reduced competition has been shown to be inaccurate, and therefore cannot serve as a defensible justification for a substantive change in antitrust policy. Accordingly, Khan’s statement that the executive order “underscores a deep mandate for change and a commitment to creating the enabling environment for reform” rests on foundations of sand.

Khan then shifts her narrative to a consideration of merger policy, stating:

Merger investigations invite us to make a set of predictive assessments, and for decades we have relied on models that generally assumed markets are self-correcting and that erroneous enforcement is more costly than erroneous non-enforcement. Both the experience of the U.S. antitrust agencies and a growing set of empirical research is showing that these assumptions appear to have been at odds with market realities.

Digital Markets

Khan argues, without explanation, that “the guidelines must better account for certain features of digital markets—including zero-price dynamics, the competitive significance of data, and the network externalities that can swiftly lead markets to tip.” She fails to make any showing that consumer welfare has been harmed by mergers involving digital markets, or that the “zero-price” feature is somehow troublesome. Moreover, the reference to “data” as being particularly significant to antitrust analysis appears to ignore research (see here) indicating there is an insufficient basis for having an antitrust presumption involving big data, and that big data (like R&D) may be associated with innovation, which enhances competitive vibrancy.

Khan also fails to note that network externalities are beneficial; when users are added to a digital platform, the platform’s value to other users increases (see here, for example). What’s more (see here), “gateways and multihoming can dissipate any monopoly power enjoyed by large networks[,] … provid[ing] another reason” why network effects may not raise competitive problems. In addition, the implicit notion that “tipping” is a particular problem is belied by the ability of new competitors to “knock off” supposed entrenched digital monopolists (think, for example, of Yahoo being displaced by Google, and Myspace being displaced by Facebook). Finally, a bit of regulatory humility is in order. Given the huge amount of consumer surplus generated by digital platforms (see here, for example), enforcers should be particularly cautious about avoiding more aggressive merger (and antitrust in general) policies that could detract from, rather than enhance, welfare.

Labor Markets

Khan argues that guidelines drafters should “incorporate new learning” embodied in “empirical research [that] has shown that labor markets are highly concentrated” and a “U.S. Treasury [report] recently estimating that a lack of competition may be costing workers up to 20% of their wages.” Unfortunately for Khan’s argument, these claims have been convincingly debunked (see here) in a new study by former FTC economist Julie Carlson (see here). As Carlson carefully explains, labor markets are not highly concentrated and labor-market power is largely due to market frictions (such as occupational licensing), rather than concentration. In a similar vein, a recent article by Richard Epstein stresses that heightened antitrust enforcement in labor markets would involve “high administrative and compliance costs to deal with a largely nonexistent threat.” Epstein points out:

[T]raditional forms of antitrust analysis can perfectly deal with labor markets. … What is truly needed is a close examination of the other impediments to labor, including the full range of anticompetitive laws dealing with minimum wage, overtime, family leave, anti-discrimination, and the panoply of labor union protections, where the gains to deregulation should be both immediate and large.

Nonhorizontal Mergers

Khan notes:

[W]e are looking to sharpen our insights on non-horizontal mergers, including deals that might be described as ecosystem-driven, concentric, or conglomerate. While the U.S. antitrust agencies energetically grappled with some of these dynamics during the era of industrial-era conglomerates in the 1960s and 70s, we must update that thinking for the current economy. We must examine how a range of strategies and effects, including extension strategies and portfolio effects, may warrant enforcement action.

Khan’s statement on non-horizontal mergers once again is fatally flawed.

With regard to vertical mergers (not specifically mentioned by Khan), the FTC abruptly withdrew, without explanation, its approval of the carefully crafted 2020 vertical-merger guidelines. That action offends the rule of law, creating unwarranted and costly business-sector confusion. Khan’s lack of specific reference to vertical mergers does nothing to solve this problem.

With regard to other nonhorizontal mergers, there is no sound economic basis to oppose mergers involving unrelated products. Challenging such mergers would serve no procompetitive purpose and would threaten to reduce welfare by preventing the realization of potential efficiencies. In a 2020 OECD paper drafted principally by DOJ and FTC economists, the U.S. government meticulously assessed the case for challenging such mergers and rejected it on economic grounds. The OECD paper is noteworthy in its entirely negative assessment of the 1960s and 1970s conglomerate cases, which Khan implicitly praises in suggesting they merely should be “updated” to deal with the current economy (citations omitted):

Today, the United States is firmly committed to the core values that antitrust law protects: competition, efficiency, and consumer welfare, rather than individual competitors. During the ten-year period from 1965 to 1975, however, the Agencies challenged several mergers of unrelated products under theories that were antithetical to those values. The “entrenchment” doctrine, in particular, condemned mergers if they strengthened an already dominant firm through greater efficiencies, or gave the acquired firm access to a broader line of products or greater financial resources, thereby making life harder for smaller rivals. This approach is no longer viewed as valid under U.S. law or economic theory. …

These cases stimulated a critical examination, and ultimate rejection, of the theory by legal and economic scholars and the Agencies. In their Antitrust Law treatise, Phillip Areeda and Donald Turner showed that to condemn conglomerate mergers because they might enable the merged firm to capture cost savings and other efficiencies, thus giving it a competitive advantage over other firms, is contrary to sound antitrust policy, because cost savings are socially desirable. It is now recognized that efficiency and aggressive competition benefit consumers, even if rivals that fail to offer an equally “good deal” suffer loss of sales or market share. Mergers are one means by which firms can improve their ability to compete. It would be illogical, then, to prohibit mergers because they facilitate efficiency or innovation in production. Unless a merger creates or enhances market power or facilitates its exercise through the elimination of competition—in which case it is prohibited under Section 7—it will not harm, and more likely will benefit, consumers.

Given the well-reasoned rejection of conglomerate theories by leading antitrust scholars and modern jurisprudence, it would be highly wasteful for the FTC and DOJ to consider covering purely conglomerate (nonhorizontal and nonvertical) mergers in new guidelines. Absent new legislation, challenges of such mergers could be expected to fail in court. Regrettably, Khan appears oblivious to that reality.

Khan’s speech ends with a hat tip to internationalism and the ICN:

The U.S., of course, is far from alone in seeing the need for a course correction, and in certain regards our reforms may bring us in closer alignment with other jurisdictions. Given that we are here at ICN, it is worth considering how we, as an international community, can or should react to the shifting consensus.

Antitrust laws have been adopted worldwide, in large part at the urging of the United States (see here). They remain, however, national laws. One would hope that the United States, which in the past was the world leader in developing antitrust economics and enforcement policy, would continue to seek to retain this role, rather than merely emulate other jurisdictions to join an “international community” consensus. Regrettably, this does not appear to be the case. (Indeed, European Commissioner for Competition Margrethe Vestager made specific reference to a “coordinated approach” and convergence between U.S. and European antitrust norms in a widely heralded October 2021 speech at the annual Fordham Antitrust Conference in New York. And Vestager specifically touted European ex ante regulation as well as enforcement in a May 5 ICN speech that emphasized multinational antitrust convergence.)

Conclusion

Lina Khan’s recent ICN speech on merger policy sends all the wrong signals on merger-guideline revisions. It strongly hints that new guidelines will embody preconceived interventionist notions at odds with sound economics. By calling for a dramatically new direction in merger policy, it interjects uncertainty into merger planning. Given their interventionist bent, Khan’s remarks, combined with prior statements by U.S. Assistant Attorney General Jonathan Kanter (see here), may further serve to deter potentially welfare-enhancing consolidations. Whether the federal courts will be willing to defer to a drastically different approach to mergers by the agencies (one at odds with several decades of a careful evolutionary approach, rooted in consumer welfare-oriented economics) is, of course, another story. Stay tuned.

Though details remain scant (and thus any final judgment would be premature), initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

After years of debate and negotiations, European lawmakers have agreed upon what will most likely be the final iteration of the Digital Markets Act (“DMA”), following the March 24 final round of “trilogue” talks.

For the uninitiated, the DMA is one in a string of legislative proposals around the globe intended to “rein in” tech companies like Google, Amazon, Facebook, and Apple through mandated interoperability requirements and other regulatory tools, such as bans on self-preferencing. Other important bills from across the pond include the American Innovation and Choice Online Act, the ACCESS Act, and the Open App Markets Act.

In many ways, the final version of the DMA represents the worst possible outcome, given the items that were still up for debate. The Commission caved to some of the Parliament’s more excessive demands—such as sweeping interoperability provisions that would extend not only to “ancillary” services, such as payments, but also to messaging services’ basic functionalities. Other important developments include the addition of voice assistants and web browsers to the list of Core Platform Services (“CPS”), and symbolically higher “designation” thresholds that further ensure the act will apply overwhelmingly to just U.S. companies. On a brighter note, lawmakers agreed that companies could rebut their designation as “gatekeepers,” though it remains to be seen how feasible that will be in practice. 

We offer here an overview of the key provisions included in the final version of the DMA and a reminder of the shaky foundations it rests on.

Interoperability

Among the most important of the DMA’s new rules are those mandating interoperability among online platforms. In a nutshell, digital platforms that are designated as “gatekeepers” will be forced to make their services “interoperable” (i.e., compatible) with those of rivals. It is argued that this will make online markets more contestable and thus boost consumer choice. But as ICLE scholars have been explaining for some time (here, here, and here), this is unlikely to be the case. Interoperability is not the panacea EU legislators claim it to be. As former ICLE Director of Competition Policy Sam Bowman has written, there are many things that could be interoperable, but aren’t. The reason is that interoperability comes with costs as well as benefits. For instance, it may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to the market and for consumers to be able to choose among them. Economists Michael L. Katz and Carl Shapiro concur:

Although compatibility has obvious benefits, obtaining and maintaining compatibility often involves a sacrifice in terms of product variety or restraints on innovation.

There are other potential downsides to interoperability. For instance, a given set of interoperable standards might be too costly to implement and/or maintain; it might preclude certain pricing models that increase output; or it might compromise some element of a product or service that offers benefits specifically because it is not interoperable (such as, e.g., security features). Consumers may also genuinely prefer closed (i.e., non-interoperable) platforms. Indeed: “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play that belie simplistic characterizations of one being inherently superior to the other.

Further, as Sam Bowman observed, narrowing choice through a more curated experience can also be valuable for users, as it frees them from having to research every possible option every time they buy or use some product (if you’re unconvinced, try turning off your spam filter for a couple of days). Instead, the relevant choice consumers exercise might be in choosing among brands. In sum, where interoperability is a desirable feature, consumer preferences will tend to push for more of it. However, it is fundamentally misguided to treat mandatory interoperability as a cure-all elixir or a “super tool” of “digital platform governance.” In a free-market economy, it is not, or at least should not be, up to courts and legislators to substitute their own judgment, based on diffuse notions of “fairness,” for businesses’ product-design decisions and consumers’ revealed preferences. After all, if we could entrust such decisions to regulators, we wouldn’t need markets or competition in the first place.

Of course, it was always clear that the DMA would contemplate some degree of mandatory interoperability; indeed, this was arguably the new law’s biggest selling point. What was up in the air until now was the scope of such obligations. The Commission had initially pushed for a comparatively restrained approach, requiring interoperability “only” in ancillary services, such as payment systems (“vertical interoperability”). By contrast, the European Parliament called for more expansive requirements that would also encompass social-media platforms and other messaging services (“horizontal interoperability”).

The problem with such far-reaching interoperability requirements is that they are fundamentally out of step with current privacy and security capabilities. As ICLE Senior Scholar Mikołaj Barczentewicz has repeatedly argued, the Parliament’s insistence on going significantly beyond the original DMA proposal and mandating interoperability of messaging services is overly broad and irresponsible. Indeed, as he notes, the “likely result is less security and privacy, more expenses, and less innovation.”

The DMA’s defenders would retort that the law allows gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament) to protect safety and privacy (it is not yet clear which wording the final version has adopted). Either way, however, the standard may be too high, and companies may well offer lower security to avoid liability for adopting measures that the Commission and the courts would judge as going beyond what is “strictly necessary” or “indispensable.” These safeguards will inevitably be all the more indeterminate (and thus ineffectual) if weighed against other vague concepts at the heart of the DMA, such as “fairness.”

Gatekeeper Thresholds and the Designation Process

Another important issue in the DMA’s construction concerns the designation of what the law deems “gatekeepers.” Indeed, the DMA will only apply to such market gatekeepers—so-designated because they meet certain requirements and thresholds. Unfortunately, the factors that the European Commission will consider in conducting this designation process—revenues, market capitalization, and user base—are poor proxies for firms’ actual competitive position. This is not surprising, however, as the procedure is mainly designed to ensure certain high-profile (and overwhelmingly American) platforms are caught by the DMA.

From this perspective, the last-minute increase in revenue and market-capitalization thresholds—from 6.5 billion euros to 7.5 billion euros, and from 65 billion euros to 75 billion euros, respectively—won’t change the scope of the companies covered by the DMA very much. But it will serve to confirm what we already suspected: that the DMA’s thresholds are mostly tailored to catch certain U.S. companies, deliberately leaving out EU and possibly Chinese competitors (see here and here). Indeed, what would have made a difference here would have been lowering the thresholds, but this was never really on the table. Ultimately, tilting the European Union’s playing field against its top trading partner, in terms of exports and trade balance, is economically, politically, and strategically unwise.

As a consolation of sorts, it seems that the Commission managed to squeeze in a rebuttal mechanism for designated gatekeepers. Imposing far-reaching obligations on companies with no (or very limited) recourse to escape the onerous requirements of the DMA would be contrary to the basic principles of procedural fairness. Still, it remains to be seen how this mechanism will be articulated and whether it will actually be viable in practice.

Double (and Triple?) Jeopardy

Two recent judgments from the European Court of Justice (ECJ)—Nordzucker and bpost—are likely to underscore the unintended effects of cumulative application of both the DMA and EU and/or national competition laws. The bpost decision is particularly relevant, because it lays down the conditions under which cases that evaluate the same persons and the same facts in two separate fields of law (sectoral regulation and competition law) do not violate the principle of ne bis in idem, also known as “double jeopardy.” As paragraph 51 of the judgment establishes:

  1. There must be precise rules to determine which acts or omissions are liable to be subject to duplicate proceedings;
  2. The two sets of proceedings must have been conducted in a sufficiently coordinated manner and within a similar timeframe; and
  3. The overall penalties must match the seriousness of the offense. 

It is doubtful whether the DMA fulfills these conditions. This is especially unfortunate considering the overlapping rules, features, and goals among the DMA and national-level competition laws, which are bound to lead to parallel procedures. In short: expect double and triple jeopardy to be hotly litigated in the aftermath of the DMA.

Of course, other relevant questions have been settled that, for reasons of scope, we will have to leave for another time. These include the level of fines (up to 10% of worldwide revenue, or 20% in the case of repeat offenses); the definition and consequences of systemic noncompliance (it seems that the Parliament’s draconian push for a general ban on acquisitions in case of systemic noncompliance has been dropped); and the addition of more core platform services (web browsers and voice assistants).

The DMA’s Dubious Underlying Assumptions

The fuss and exhilaration surrounding the impending adoption of the EU’s most ambitious competition-related proposal in decades should not obscure the dubious assumptions that underpin it. Three problems stand out:

  1. It is still unclear that intervention in digital markets is necessary, let alone urgent.
  2. Even if it were clear, there is scant evidence to suggest that tried and tested ex post instruments, such as those envisioned in EU competition law, are not up to the task.
  3. Even if the prior two points had been established beyond any reasonable doubt (which they haven’t), it is still far from clear that DMA-style ex ante regulation is the right tool to address potential harms to competition and to consumers that arise in digital markets.

It is unclear that intervention is necessary

Despite a mounting moral panic around, and zealous political crusading against, Big Tech (an epithet meant to conjure antipathy and distrust), it is still unclear that intervention in digital markets is necessary. Much of the behavior the DMA assumes to be anti-competitive has plausible pro-competitive justifications. Self-preferencing, for instance, is a normal part of how platforms operate, both to improve the value of their core products and to earn returns to reinvest in their development. As ICLE’s Dirk Auer points out, since platforms’ incentives are to maximize the value of their entire product ecosystem, those that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product (the example of Facebook’s integration of Instagram is a case in point). Thus, while self-preferencing may, in some cases, be harmful, a blanket presumption of harm is thoroughly unwarranted.

Similarly, the argument that switching costs and data-related increasing returns to scale (in fact, data generally entails diminishing returns) have led to consumer lock-in and thereby raised entry barriers has also been exaggerated to epic proportions (pun intended). As we have discussed previously, there are plenty of counterexamples where firms have easily overcome seemingly “insurmountable” barriers to entry, switching costs, and network effects to disrupt incumbents. 

To pick a recent case: how many of us had heard of Zoom before the pandemic? Where was TikTok three years ago? (see here for a multitude of other classic examples, including Yahoo and Myspace).

Can you really say, with a straight face, that switching costs between messaging apps are prohibitive? I’m not even that active, and I use at least seven such apps on a daily basis: Facebook Messenger, WhatsApp, Instagram, Twitter, Viber, Telegram, and Slack (it took me all of three minutes to download and start using Slack—my newest addition). In fact, chances are that, like me, you have always multihomed nonchalantly and had never even considered that switching costs were impossibly high (or that they were a thing) until the idea that you were “locked in” by Big Tech was drilled into your head by politicians and other busybodies looking for trophies to adorn their walls.

What about the “unprecedented,” quasi-fascistic levels of economic concentration? First, measures of market concentration are sometimes anchored in flawed methodology and market definitions (see, e.g., Epic’s insistence that Apple is a monopolist in the market for operating systems, conveniently ignoring that competition occurs at the smartphone level, where Apple has a worldwide market share of 15%—see pages 45-46 here). But even if such measurements were accurate, high levels of concentration don’t necessarily mean that firms do not face strong competition. In fact, as Nicolas Petit has shown, tech companies compete vigorously against each other across markets.
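To see just how sensitive concentration measures are to market definition, consider a toy Herfindahl-Hirschman Index (HHI) calculation. All shares below are hypothetical except Apple’s roughly 15% worldwide smartphone share noted above; the point is only that the same firm can register as a monopolist or a modest player depending on where the market’s boundaries are drawn.

```python
# Toy HHI calculation. The HHI is the sum of squared market shares (in
# percentage points); under the 2010 U.S. Horizontal Merger Guidelines,
# below 1,500 is "unconcentrated" and above 2,500 is "highly concentrated."

def hhi(shares_pct):
    """Return the Herfindahl-Hirschman Index for a list of shares (in %)."""
    return sum(s ** 2 for s in shares_pct)

# Narrow definition: a market drawn as "Apple's own mobile OS," in which
# Apple is the sole supplier by construction.
narrow = [100.0]

# Broader definition: worldwide smartphones, with Apple at ~15% and the
# remaining shares assumed for illustration only (not sourced figures).
broad = [23.0, 15.0, 14.0, 13.0, 10.0, 9.0, 8.0, 8.0]

print(f"Narrow market HHI: {hhi(narrow):,.0f}")  # 10,000 -> a "monopoly"
print(f"Broad market HHI:  {hhi(broad):,.0f}")   # 1,428 -> unconcentrated
```

The arithmetic is trivial, but it captures the objection in the text: the concentration “finding” is baked into the market definition before any evidence about actual competition is examined.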

But perhaps the DMA’s raison d’être rests less on market failure than on legal or enforcement failure? This, too, is misguided.

EU competition law is already up to the task

As Giuseppe Colangelo has argued persuasively (here and here), it is not at all clear that ex post competition regulation is insufficient to tackle anti-competitive behavior in the digital sector:

Ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices described as new and peculiar to app stores. 

The recent Google Shopping decision, in which the Commission found that Google had abused its dominant position by preferencing its own online-shopping service in Google Search results, is a case in point (the decision was confirmed by the General Court and is now pending review before the European Court of Justice). The “self-preferencing” category has since been applied by other EU competition authorities. The Italian competition authority, for instance, fined Amazon 1 billion euros for preferencing its own distribution service, Fulfilled by Amazon, on the Amazon marketplace (i.e., Amazon.it). Thus, Article 102, which includes prohibitions on “applying dissimilar conditions to similar transactions,” appears sufficiently flexible to cover self-preferencing, as well as other potentially anti-competitive offenses relevant to digital markets (e.g., essential facilities).

For better or for worse, EU competition law has historically been sufficiently pliable to serve a range of goals and values. It has also allowed for experimentation and incorporated novel theories of harm and economic insights. Here, the advantage of competition law is that it allows for a more refined, individualized approach that can avoid some of the pitfalls of applying a one-size-fits-all model across all digital platforms. Those pitfalls include: harming consumers, jeopardizing the business models of some of the most successful and pro-consumer companies in existence, and ignoring the differences among platforms, such as between Google and Apple’s app stores. I turn to these issues next.

Ex ante regulation probably isn’t the right tool

Even if it were clear that intervention is necessary and that existing competition law was insufficient, it is not clear that the DMA is the right regulatory tool to address any potential harms to competition and consumers that may arise in digital markets. Here, legislators need to be wary of unintended consequences, trade-offs, and regulatory fallibility. For one, it is possible that the DMA will essentially consolidate the power of tech platforms, turning them into de facto public utilities. This will not foster competition, but rather will make smaller competitors systematically dependent on so-called gatekeepers. Indeed, why become the next Google if you can just free ride off of the current Google? Why download an emerging messaging app if you can already interact with its users through your current one? In a way, then, the DMA may become a self-fulfilling prophecy.

Moreover, turning closed or semi-closed platforms such as iOS into open platforms more akin to Android blurs the distinctions among products and dampens interbrand competition. It is a supreme paradox that interoperability and sideloading requirements purportedly give users more choice by taking away the option of choosing a “walled garden” model. As discussed above, overriding the revealed preferences of millions of users is neither pro-competitive nor pro-consumer (but it probably favors some competitors at the expense of those two things).

Nor are many of the other obligations contemplated in the DMA necessarily beneficial to consumers. Do users really not want to have default apps come preloaded on their devices, and instead have to download and install them manually? Ditto for operating systems. What is the point of an operating system if it doesn’t come with certain functionalities, such as a web browser? What else should we unbundle—the keyboard on iOS? The flashlight? Do consumers really want to choose from dozens of app stores when turning on their new phone for the first time? Do they really want to have their devices cluttered with pointless split-screens? Do users really want to find all their contacts (and be found by all their contacts) across all messaging services? (I switched to Viber because I emphatically didn’t.) Do they really want to have their privacy and security compromised because of interoperability requirements?

Then there is the question of regulatory fallibility. As Alden Abbott has written on the DMA and other ex ante regulatory proposals aimed at “reining in” tech companies:

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). 

This brings us back to the second point: without evidence that antitrust law is “not up to the task,” far-reaching and untested regulatory initiatives with potentially high error costs are put forth as superior to long-established, consumer-based antitrust enforcement. Yes, antitrust may have downsides (e.g., relative indeterminacy and slowness), but these pale in comparison to the DMA’s (e.g., large error costs resulting from high information requirements, rent-seeking, agency capture).

Conclusion

The DMA is an ambitious piece of regulation purportedly aimed at ensuring “fair and open digital markets.” This implies that markets are currently unfair and closed, or that they risk becoming so absent far-reaching regulatory intervention at the EU level. However, it is unclear to what extent such assumptions are borne out by the reality of markets. Are digital markets really closed? Are they really unfair? If so, is it really certain that regulation is necessary? Has antitrust truly proven insufficient? The DMA also implies that ex ante regulation is necessary to address these problems, and that the costs won’t outweigh the benefits. These are heroic assumptions that have never seriously been put to the test.

Considering such brittle empirical foundations, the DMA was always going to be a contentious piece of legislation. However, there was always the hope that EU legislators would show restraint in the face of little empirical evidence and high error costs. Today, these hopes have been dashed. With the adoption of the DMA, the Commission, Council, and the Parliament have arguably taken a bad piece of legislation and made it worse. The interoperability requirements in messaging services, which are bound to be a bane for user privacy and security, are a case in point.

After years spent trying to anticipate the whims of EU legislators, we finally know where we’re going, but it’s still not entirely clear why we’re going there.

As the European Union’s Digital Markets Act (DMA) has entered the final stage of its approval process, one matter the inter-institutional negotiations appear likely to leave unresolved is how the DMA’s relationship with competition law affects the very rationale and legal basis for the intervention.

The DMA is explicitly grounded on the questionable assumption that competition law alone is insufficient to rein in digital gatekeepers. Accordingly, EU lawmakers have declared the proposal to be a necessary regulatory intervention that will complement antitrust rules by introducing a set of ex ante obligations.

To support this line of reasoning, the DMA’s drafters insist that it protects a different legal interest from antitrust. Indeed, the intervention is ostensibly grounded in Article 114 of the Treaty on the Functioning of the European Union (TFEU), rather than Article 103—the article that spells out the implementation of competition law. Pursuant to Article 114, the DMA opts for centralized enforcement at the EU level to ensure harmonized implementation of the new rules.

It has nonetheless been clear from the very beginning that the DMA lacks a distinct purpose. Indeed, the interests it nominally protects (the promotion of fairness and contestability) do not differ from the substance and scope of competition law. The European Parliament has even suggested that the law’s aims should also include fostering innovation and increasing consumer welfare, which also are within the purview of competition law. Moreover, the DMA’s obligations focus on practices that have already been the subject of past and ongoing antitrust investigations.

Where the DMA differs in substance from competition law is simply that it would free enforcers from the burden of standard antitrust analysis. The law is essentially a convenient shortcut that would dispense with the need to define relevant markets, prove dominance, and measure market effects (see here). It essentially dismisses economic analysis and the efficiency-oriented consumer welfare test in order to lower the legal standards and evidentiary burdens needed to bring an investigation.

Acknowledging the continuum between competition law and the DMA, the European Competition Network and some member states (self-appointed as “friends of an effective DMA”) have proposed empowering national competition authorities (NCAs) to enforce DMA obligations.

Against this background, my new ICLE working paper pursues a twofold goal. First, it aims to show how, because of its ambiguous relationship with competition law, the DMA falls short of its goal of preventing regulatory fragmentation. Second, despite my significant doubts about the DMA’s content and rationale, I argue that fully centralized enforcement at the EU level should be preserved and that frictions with competition law would be better confined by limiting the law’s application to a few large platforms that are demonstrably able to orchestrate an ecosystem.

Welcome to the (Regulatory) Jungle

The DMA will not replace competition rules. It will instead be implemented alongside them, creating several overlapping layers of regulation. Indeed, my paper broadly illustrates how the very same practices that are targeted by the DMA may also be investigated by NCAs under European and national-level competition laws, under national competition laws specific to digital markets, and under national rules on economic dependence.

While the DMA nominally prohibits EU member states from imposing additional obligations on gatekeepers, member states remain free to adapt their competition laws to digital markets in accordance with the leeway granted by Article 3(3) of the Modernization Regulation. Moreover, NCAs may be eager to exploit national rules on economic dependence to tackle perceived imbalances of bargaining power between online platforms and their business counterparties.

The risk of overlap with competition law is also fostered by the DMA’s designation process, which may further widen the law’s scope in the future in terms of what sorts of digital services and firms may fall under the law’s rubric. As more and more industries explore platform business models, the DMA would—without some further constraints on its scope—be expected to cover a growing number of firms, including those well outside Big Tech or even native tech companies.

As a result, the European regulatory landscape could become even more fragmented in the post-DMA world. The parallel application of the DMA and antitrust rules poses the risks of double jeopardy (see here) and of conflicting decisions.

A Fully Centralized and Ecosystem-Based Regulatory Regime

To counter the risk that digital-market activity will be subject to regulatory double jeopardy and conflicting decisions across EU jurisdictions, DMA enforcement should not only be fully centralized at the EU level, but that centralization should be strengthened. This could be accomplished by empowering the Commission with veto rights, as was requested by the European Parliament.

This veto power should certainly extend to national measures targeting gatekeepers that run counter to the DMA or to decisions adopted by the Commission under the DMA. But it should also include prohibiting national authorities from carrying out investigations on their own initiative without prior authorization by the Commission.

Moreover, it will also likely be necessary to significantly redefine the DMA’s scope. Notably, EU leaders could mitigate the risk of fragmentation from the DMA’s frictions with competition law by circumscribing the law to ecosystem-related issues. This would effectively limit its application to a few large platforms that are demonstrably able to orchestrate an ecosystem. It would also reinstate the DMA’s original justification, which was to address the emergence of a few large platforms that are able to act as gatekeepers and enjoy entrenched positions as a result of their conglomerate ecosystems.

Changes to the designation process should also be accompanied by confining the list of ex ante obligations the law imposes. These should reflect relevant differences in platforms’ business models and be tailored to the specific firm under scrutiny, rather than implementing a one-size-fits-all approach.

There are compelling arguments against the policy choice to regulate platforms and their ecosystems like utilities. The suggested adaptations would at least acknowledge the regulatory nature of the DMA, removing the suspicion that it is just an antitrust intervention dressed up as regulation.

U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)   

Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).

Discussion

At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting transactions that harmfully reduce output and undermine innovation, such as hard-core collusion and courses of conduct that inefficiently exclude competitors and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.

This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.

The degree of congruence between national security and appropriate consumer welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:

  1. defense-industry mergers;
  2. the intellectual property-antitrust interface, with a focus on patent licensing; and
  3. proposed federal antitrust legislation.

The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national defense interests. The second topic does not manifest a tension between antitrust and national security; rather, it illustrates a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.

Defense-Industry Mergers

The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.

The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”

Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).

Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.

The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne Holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:

Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.

Intellectual-Property Licensing

Shifts in government IP-antitrust patent-licensing policy perspectives

Intellectual-property (IP) licensing, particularly involving patents, is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumers’ and producers’ surplus). See generally, for example, Daniel Spulber’s The Case for Patents and Jonathan Barnett’s Innovation, Firms, and Markets. Patents are a property right, and they do not necessarily convey market power, as the federal government has recognized (see 2017 DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property).

Standard setting through standard-setting organizations (SSOs) has been a particularly important means of spawning valuable benchmarks (standards) that allow new patent-backed technologies to drive innovation and enable mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing these products to market and encouraging follow-on innovations that have driven fast-paced, welfare-enhancing product and process quality improvements.

The DOJ and FTC have recognized specific efficiency benefits of IP licensing in their 2017 Antitrust Guidelines for the Licensing of Intellectual Property, stating (citations deleted):

Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …

[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.

Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here.) Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:

  1. they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; and
  2. they seek to obtain injunctions against infringers.

In addition, antitrust policy officials (see the 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first-best (included in the standard) and second-best (not included in the standard) competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between the first- and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
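
In stylized form (an illustrative formalization, not the agencies’ own notation), this benchmark caps the FRAND royalty at the ex ante incremental value of the chosen technology:

r(FRAND) ≤ v1 − v2

where v1 is the value of the patented technology selected for the standard and v2 is the value of the next-best alternative passed over. Because both values are measured before the standard is adopted, the cap excludes any value created by standardization itself, an omission taken up in the next paragraph.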

This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (a classic property-right remedy) encourage opportunistic patent infringement and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They are also harmed by the implication that they must accept a relatively low hypothetical “ex ante” licensing rate—a rate that fails to take into account the substantial economic-welfare value that accrues to the economy from their contribution to the standard. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.

Recognizing this problem, in 2018 DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which recognized that:

  1. antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
  2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
  3. SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
  4. unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)

Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.

Regrettably, antitrust-enforcement-agency statements during the last year have effectively rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a draft policy statement on licensing negotiations and remedies for SEPs, which displaces a 2019 statement that had been in line with the NMA. Unless the FTC and the Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.

The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:

China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

A Center for Strategic and International Studies submission on the draft policy statement (signed by a former deputy secretary of defense, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:

[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived of the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China national-security issues (footnotes omitted):

[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …

The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.

Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.

Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …

Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.

A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.

These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, prior International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”

Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”

In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.

FTC v. Qualcomm: Misguided enforcement and national security

U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):

The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.

Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless devices makers. Qualcomm also makes computer processors and chips embodied in cutting edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high tech American success story that has greatly benefited consumers.

Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMs that have entered a licensing agreement with Qualcomm.

The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy allegedly “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.

The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]

Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.

The appellate decision in Qualcomm largely supports pillar four of the NMA, that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year-by-year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.

Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]

Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):

[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.

The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).

Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.

Proposed Federal Antitrust Legislation

Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, thereby undermining the consumer welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:

Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.

A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity.  These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …

The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses.  U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors.  Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.

If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition.  Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs.  Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …

Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies.  The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.      

Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”

The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.

Conclusion

Enlightened antitrust enforcement, centered on consumer welfare, can and should be applied in a manner that is harmonious with national-security interests.

The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.

Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.

In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.

Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.

The acceptance and implementation of due-process standards confer a variety of welfare benefits on society. As Christopher Yoo, Thomas Fetzer, Shan Jiang, and Yong Huang explain, strong procedural due-process protections promote: (1) compliance with basic norms of impartiality; (2) greater accuracy of decisions; (3) stronger economic growth; (4) increased respect for government; (5) better compliance with the law; (6) better control of the bureaucracy; (7) restraints on the influence of special-interest groups; and (8) reduced corruption.  

Recognizing these benefits (and consistent with the long Anglo-American tradition of recognizing due-process rights that dates back to Magna Carta), the U.S. government (USG) has long been active in advancing the adoption of due-process principles by competition-law authorities around the world, working particularly through the Organisation for Economic Co-operation and Development (OECD) and the International Competition Network (ICN). More generally, due process may be seen as an aspect of the rule of law, which is as important in antitrust as in other legal areas.

The USG has supported OECD Competition Committee work on due-process safeguards, which began in 2010 and culminated in the OECD ministers’ October 2021 adoption of a “Recommendation on Transparency and Procedural Fairness in Competition Law Enforcement.” This recommendation calls for: (1) transparency and predictability in competition-law enforcement; (2) independence, impartiality, and professionalism of competition authorities; (3) non-discrimination, proportionality, and consistency in the treatment of parties subject to scrutiny; (4) timeliness in handling cases; (5) meaningful engagement with parties (including parties’ right to respond and be heard); (6) protection of confidential and privileged information; (7) impartial judicial review of enforcement decisions; and (8) periodic review of policies, rules, procedures, and guidelines, to ensure that they are aligned with the preceding seven principles.

The USG has also worked through the International Competition Network (ICN) to generate support for the acceptance of due-process principles by ICN member competition agencies and their governments. In describing ICN due-process initiatives, James Rill and Jana Seidl have explained that “[t]he current challenge is to determine the extent to which the ICN, as a voluntary organization, can or should establish mechanisms to evaluate implementation of … [due process] norms by its members and even non-members.”

In 2019, the ICN announced creation of a Framework for Competition Agency Procedures (CAP), open to both ICN and non-ICN national and multinational (most prominently, the EU’s Directorate General for Competition) competition agencies. The CAP essentially embodied the principles of a June 2018 U.S. Justice Department (DOJ) framework proposal. A September 2021 CAP Report (footnotes omitted) issued at an ICN steering-group meeting noted that the CAP had 73 members, and summarized the history and goals of the CAP as follows:

The ICN CAP is a non-binding, opt-in framework. It makes use of the ICN infrastructure to maximize visibility and impact while minimizing the administrative burden for participants that operate in different legal regimes and enforcement systems with different resource constraints. The ICN CAP promotes agreement among competition agencies worldwide on fundamental procedural norms. The Multilateral Framework for Procedures project, launched by the US Department of Justice in 2018, was the starting point for what is now the ICN CAP.

The ICN CAP rests on two pillars: the first pillar is a catalogue of fundamental, consensus principles for fair and effective agency procedures that reflect the broad consensus within the global competition community. The principles address: non-discrimination, transparency, notice of investigations, timely resolution, confidentiality protections, conflicts of interest, opportunity to defend, representation, written decisions, and judicial review.

The second pillar of the ICN CAP consists of two processes: the “CAP Cooperation Process,” which facilitates a dialogue between participating agencies, and the “CAP Review Process,” which enhances transparency about the rules governing participants’ investigation and enforcement procedures.

The ICN CAP template is the practical implementation tool for the CAP. Participants each submit CAP templates, outlining how their agencies adhere to each of the CAP principles. The templates allow participants to share and explain important features of their systems, including links and other references to related materials such as legislation, rules, regulations, and guidelines. The CAP templates are a useful resource for agencies to consult when they would like to gain a quick overview of other agencies’ procedures, benchmark with peer agencies, and develop new processes and procedures.

Through the two pillars and the template, the CAP provides a framework for agencies to affirm the importance of the CAP principles, to confer with other jurisdictions, and to illustrate how their regulations and guidelines adhere to those principles.

In short, the overarching goal of the ICN CAP is to give agencies a “nudge” to implement due-process principles by encouraging consultation with peer CAP members and exposing to public view agencies’ actual due-process record. The extent to which agencies will prove willing to strengthen their commitment to due process because of the CAP, or even join the CAP, remains to be seen. (China’s competition agency, the State Administration for Market Regulation (SAMR), has not joined the ICN CAP.)

Antitrust, Due Process, and the Rule of Law at the DOJ and the FTC  

Now that the ICN CAP and OECD recommendation are in place, it is important that the DOJ and Federal Trade Commission (FTC), as long-time international promoters of due process, lead by example in adhering to all of those multinational instruments’ principles. A failure to do so would, in addition to having negative welfare consequences for affected parties (and U.S. economic welfare), undermine USG international due-process advocacy. Less effective advocacy efforts could, of course, impose additional costs on American businesses operating overseas, by subjecting them to more procedurally defective foreign antitrust prosecutions than they would otherwise face.

With those considerations in mind, let us briefly examine the current status of due-process protections afforded by the FTC and DOJ. Although traditionally robust procedural safeguards remain strong overall, some worrisome developments during the first year of the Biden administration merit highlighting. Those developments implicate classic procedural issues and some broader rule-of-law concerns. (This commentary does not examine due-process and rule-of-law issues associated with U.S. antitrust enforcement at the state level, a topic that also warrants scrutiny.)

The FTC

  • New FTC leadership has taken several actions that have unfortunate due-process and rule-of-law implications (many of them through highly partisan 3-2 commission votes featuring strong dissents). The following critique of recent changes to Hart-Scott-Rodino (HSR) Act merger-review practice is illustrative:

Consider the HSR Act, a Congressional compromise that gave enforcers advance notice of deals and parties the benefit of repose. HSR review [at the FTC] now faces death by a thousand cuts. We have hit month nine of a “temporary” and “brief” suspension of early termination. Letters are sent to parties when their waiting periods expire, warning them to close at their own risk. Is the investigation ongoing? Is there a set amount of time the parties should wait? No one knows! The new prior approval policy will flip the burden of proof and capture many deals below statutory thresholds. And sprawling investigations covering non-competition concerns exceed our Clayton Act authority.

These policy changes impose a gratuitous tax on merger activity – anticompetitive and procompetitive alike. There are costs to interfering with the market for corporate control, especially as we attempt to rebound from the pandemic. If new leadership wants the HSR Act rewritten, they should persuade Congress to amend it rather than taking matters into their own hands.

Uncertainty and delay surrounding merger proposals, and new merger-review processes that appear to flout statutory commands, are FTC “innovations” in obvious tension with due-process guarantees.

  • FTC rulemaking initiatives have due-process and rule-of-law problems. As Commissioner Wilson noted (footnotes omitted), “[t]he [FTC] majority changed our rules of practice to limit stakeholder input and consolidate rulemaking power in the chair’s office. In Commissioner [Noah] Phillips’ words, these changes facilitate more rules, but not better ones.” Lack of stakeholder input offends due process. Even more serious, however, is the fact that far-reaching FTC competition rules are being planned (see the December 2021 FTC Statement of Regulatory Priorities). FTC competition rulemaking is likely beyond its statutory authority and would fail a cost-benefit analysis (see here). Moreover, even if competition rules survived, they would offend the rule of law (see here) by “lead[ing] to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency.”
  • The FTC’s July 2021 withdrawal of its 2015 “Statement of Enforcement Principles Regarding ‘Unfair Methods of Competition’ [UMC] Under Section 5 of the FTC Act” likewise undercuts the rule of law (see here). The 2015 Statement had tended to increase predictability in enforcement by tying the FTC’s exercise of its UMC authority to well-understood antitrust rule-of-reason principles and the generally accepted consumer welfare standard. By withdrawing the statement (over the dissents of Commissioners Wilson and Phillips) without promulgating a new policy, the FTC majority reduced enforcement guidance and generated greater legal uncertainty. The notion that the FTC may apply the UMC concept in an unbounded fashion lacks legal principle and threatens to chill innovative and welfare-enhancing business conduct.
  • Finally, the FTC’s abrupt September 2021 withdrawal of its approval of the jointly issued 2020 DOJ-FTC Vertical Merger Guidelines (again over dissents by Commissioners Wilson and Phillips) offends the rule of law in three ways. As Commissioner Wilson explains, it engenders confusion as to FTC policies regarding vertical-merger analysis going forward; it appears to reflect flawed economic thinking regarding vertical integration (which may in turn lead to enforcement error); and it creates a potential tension between DOJ and FTC approaches to vertical acquisitions (the third concern may disappear if and when DOJ and FTC agree to new merger guidelines).

The DOJ

To date, the Biden administration’s DOJ has taken fewer actions that implicate rule-of-law and due-process concerns. Two recent initiatives with significant rule-of-law implications, however, deserve mention.

  • First, on Dec. 6, 2021, DOJ suddenly withdrew a 2019 policy statement on “Licensing Negotiations and Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments.” In so doing, DOJ simultaneously released a new draft policy statement on the same topic and requested public comments. The timing of the withdrawal was peculiar, since the U.S. Patent and Trademark Office (PTO) and the National Institute of Standards and Technology (NIST)—which had joined with DOJ in the 2019 policy statement (itself a replacement for a 2013 policy statement)—did not yet have new Senate-confirmed leadership and were apparently not involved in the withdrawal. What’s more, DOJ originally requested that public comments be filed by the beginning of January, a ridiculously short amount of time for such a complex topic. (It later relented and established an early February deadline.) More serious than these procedural irregularities, however, are two new features of the draft policy statement: (1) its delineation of a suggested private-negotiation framework for patent licensing; and (2) its assertion that standard-essential patent (SEP) holders essentially forfeit the right to seek an injunction. These provisions, though not binding, may have a coercive effect on some private negotiators, and they problematically insert the government into matters that are appropriately the province of private businesses and the courts. Such involvement by government enforcers in private negotiations, which treats one category of patents (SEPs) less favorably than others, raises rule-of-law questions.
  • Second, in January 2022, DOJ and the FTC jointly issued a “Request for Information on Merger Enforcement” (RFI) that contemplated the issuance of new merger guidelines (see my recent analysis, here). The RFI was chock-full of queries to prospective commentators that generally reflected a merger-skeptical tone. This suggests a predisposition to challenge mergers that, if embodied in guidelines language, could discourage some (or perhaps many) non-problematic consolidations from being proposed. New merger guidelines that were implicitly anti-merger would be a departure from previous guidelines, which stated in neutral fashion that they would consider both the anticompetitive risks and procompetitive benefits of mergers being reviewed. A second major concern is that the enforcement agencies might produce long and detailed guidelines containing all or most of the many theories of competitive harm found in the RFI. Overly complex guidelines would provide no true guidance to private parties, inconsistent with the principle that individuals should be informed of what the law is. Such guidelines would also give enforcers greater flexibility to selectively pick and choose the theories best suited to block particular mergers. As such, the guidelines might be viewed by judges as justifications for arbitrary, rather than principled, enforcement, at odds with the rule of law.

Conclusion

“No man is an island, entire of itself.” In today’s world of multinational antitrust cooperation, the same holds true for competition agencies. Efforts to export due process in competition law, which have been a USG priority for many years, will inevitably falter if other jurisdictions perceive the FTC and DOJ as not practicing what they preach.

It is to be hoped that the FTC and DOJ will take into account this international dimension in assessing the merits of antitrust “reforms” now under consideration. New enforcement policies that sow delay and uncertainty undermine the rule of law and are inconsistent with due-process principles. The consumer-welfare harm that may flow from such deficient policies may be substantial. The agency missteps identified above should be rectified, and new policies that would weaken due-process protections and undermine the rule of law should be avoided.

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.

Life Sciences: USTR Campaigns Against Intellectual-Property Rights

In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.

Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future. 

Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.

This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.

Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition

In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.

It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment, and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.

One of Illumina’s few competitors in the global market is the BGI Group, a China-based company that, in 2013, acquired Complete Genomics, a U.S. target that Illumina pursued but relinquished due to anticipated resistance from the FTC in the merger-review process.  The transaction was then cleared by the Committee on Foreign Investment in the United States (CFIUS).

The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention. 

Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.

If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively. 

Wireless Communications: DOJ Takes Aim at Standard-Essential Patents

Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries.  It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.

In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.

The inability to block infringers disrupts this equilibrium by signaling to potential licensees that wireless technologies developed by others can be used at will, with the terms of use to be negotiated through costly and protracted litigation. A no-injunction rule would discourage innovation while encouraging delaying tactics favored by well-resourced device manufacturers (including some of the world’s largest companies by market capitalization) that occupy bottleneck pathways to lucrative retail markets in the United States, China, and elsewhere.

Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.

Conclusion

From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.

This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.

Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating), or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key finding to this insight. He found that among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.
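
In stylized form (my notation, not Coase’s or Williamson’s), the make-or-buy decision reduces to: contract a function out to the market only if

C(internal) > C(market) + T(opportunism)

where T(opportunism) captures the expected costs of incomplete contracts and opportunistic behavior. Anything that raises T, including weak protection for traded technology, tips the balance toward in-house production and vertical integration.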

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can give rise to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymakers’ attention, empirical and anecdotal evidence suggests that holdout is the more salient problem.

Policies that reduce these costs—especially the costs of patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors, in turn, face stronger incentives, at both the intensive and extensive margins, to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (essentially, the strategies firms employ to prevent rivals from copying their inventions). This literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here). 

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse-engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities. In contrast, startup companies—almost by definition—must outsource production.

Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with few or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe that intellectual-property protection is an important driver of venture-capital activity. Beyond simply enabling firms to earn returns on their investments, patents may signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.

In short, intellectual property can stimulate startup activity through various mechanisms. There is thus good reason to believe that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. When it is harder for patent holders to obtain injunctions, licensees and implementers gain an advantage in the short term, because they can use patented technology without having to negotiate and pay the full market price. 

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual-property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but in practice China extends its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments that: 

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 

Conclusion

In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

Alden F. Abbott
Senior Research Fellow, Mercatus Center, George Mason University
Former General Counsel, U.S. Federal Trade Commission

Jonathan Barnett
Torrey H. Webb Professor of Law, University of Southern California

Ronald A. Cass
Dean Emeritus, School of Law, Boston University
Former Commissioner and Vice-Chairman, U.S. International Trade Commission

Giuseppe Colangelo
Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)

Richard A. Epstein
Laurence A. Tisch Professor of Law, New York University

Bowman Heiden
Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley

Justin (Gus) Hurwitz
Professor of Law, University of Nebraska

Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri

Stan J. Liebowitz
Ashbel Smith Professor of Economics, University of Texas at Dallas

John E. Lopatka
A. Robert Noll Distinguished Professor of Law, Penn State University

Keith Mallinson
Founder and Managing Partner, WiseHarbor

Geoffrey A. Manne
President and Founder, International Center for Law & Economics

Adam Mossoff
Professor of Law, George Mason University

Kristen Osenga
Austin E. Owen Research Scholar and Professor of Law, University of Richmond

Vernon L. Smith
George L. Argyros Endowed Chair in Finance and Economics, Chapman University
Nobel Laureate in Economics (2002)

Daniel F. Spulber
Elinor Hobbs Distinguished Professor of International Business, Northwestern University

David J. Teece
Thomas W. Tusher Professor in Global Business, University of California, Berkeley

Joshua D. Wright
University Professor of Law, George Mason University
Former Commissioner, U.S. Federal Trade Commission

John M. Yun
Associate Professor of Law, George Mason University
Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission

European Union (EU) legislators are now considering an Artificial Intelligence Act (AIA)—the original draft of which was published by the European Commission in April 2021—that aims to ensure AI systems are safe in a number of uses designated as “high risk.” One of the big problems with the AIA is that, as originally drafted, it is not at all limited to AI, but would be sweeping legislation covering virtually all software. The EU governments seem to have realized this and are trying to fix the proposal. However, some pressure groups are pushing in the opposite direction. 

While there can be reasonable debate about what constitutes AI, almost no one would intuitively consider most of the software covered by the original AIA draft to be artificial intelligence. Ben Mueller and I covered this in more detail in our report “More Than Meets The AI: The Hidden Costs of a European Software Law.” Among other issues, the proposal would seriously undermine the legitimacy of the legislative process: the public is told that a law is meant to cover one sphere of life, but it mostly covers something different. 

It also does not appear that the Commission drafters seriously considered the costs that would arise from imposing the AIA’s regulatory regime on virtually all software across a sphere of “high-risk” uses that include education, employment, and personal finance.

The following example illustrates how the AIA would work in practice: A school develops a simple logic-based expert system to assist in making decisions related to admissions. It could be as basic as a Microsoft Excel macro that checks if a candidate is in the school’s catchment area based on the candidate’s postal code, by comparing the content of one column of a spreadsheet with another column. 
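For concreteness, the entire “expert system” might amount to something like the following minimal sketch, rendered here in Python rather than as an Excel macro (the postal codes and applicant names are invented for illustration):

```python
# Hypothetical catchment-area check: the whole "logic-based expert system."
# The catchment list and applicant data are invented for illustration.
CATCHMENT_POSTAL_CODES = {"1010", "1020", "1030"}

def in_catchment(postal_code: str) -> bool:
    # A single membership test, comparing one data column against another.
    return postal_code.strip() in CATCHMENT_POSTAL_CODES

applicants = [("Alice", "1010"), ("Bob", "9999")]
for name, code in applicants:
    status = "in catchment" if in_catchment(code) else "outside catchment"
    print(f"{name}: {status}")
```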

Under the AIA’s current definitions, this would not only be an “AI system,” but also a “high-risk AI system” (because it is “intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions” – Annex III of the AIA). Hence, to use this simple Excel macro, the school would be legally required to, among other things:

  1. put in place a quality management system;
  2. prepare detailed “technical documentation”;
  3. create a system for logging and audit trails;
  4. conduct a conformity assessment (likely requiring costly legal advice);
  5. issue an “EU declaration of conformity”; and
  6. register the “AI system” in the EU database of high-risk AI systems.

This does not sound like proportionate regulation. 

Some governments of EU member states have been pushing for a narrower definition of an AI system, drawing rebuke from pressure groups Access Now and Algorithm Watch, who issued a statement effectively defending the “all-software” approach. For its part, the European Council, which represents member states, unveiled compromise text in November 2021 that changed general provisions around the AIA’s scope (Article 3), but not the list of in-scope techniques (Annex I).

While the new definition appears slightly narrower, it remains overly broad and will create significant uncertainty. It is likely that many software developers and users will require costly legal advice to determine whether a particular piece of software in a particular use case is in scope or not. 

The “core” of the new definition is found in Article 3(1)(ii), according to which an AI system is one that “infers how to achieve a given set of human-defined objectives using learning, reasoning or modeling implemented with the techniques and approaches listed in Annex I.” This redefinition does precious little to solve the AIA’s original flaws. A legal inquiry focused on an AI system’s capacity for “reasoning” and “modeling” will tend either toward overinclusion or toward imagining a software reality that doesn’t exist at all. 

The revised text can still be interpreted so broadly as to cover virtually all software. Given that the list of in-scope techniques (Annex I) was not changed, any “reasoning” implemented with “Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems” (i.e., all software) remains in scope. In practice, the combined effect of those two provisions will be hard to distinguish from the original Commission draft. In other words, we still have an all-software law, not an AI-specific law. 

The AIA deliberations highlight two basic difficulties in regulating AI. First, it is likely that many activists and legislators have in mind science-fiction scenarios of strong AI (or “artificial general intelligence”) when pushing for regulations that will apply in a world where only weak AI exists. Strong AI is AI that is at least equal to human intelligence and is therefore capable of some form of agency. Weak AI, by contrast, comprises software techniques that augment human processing of information. For as long as computer scientists have been thinking about AI, there have been serious doubts that software systems can ever achieve generalized intelligence or become strong AI. 

Thus, what’s really at stake in regulating AI is regulating software-enabled extensions of human agency. But leaving aside the activists who explicitly do want controls on all software, lawmakers who promote the AIA have something in mind conceptually distinct from “software.” This raises the question of whether the “AI” that lawmakers imagine they are regulating is actually a null set. These laws can only regulate the equivalent of Excel spreadsheets at scale, and lawmakers need to think seriously about how they intervene. For such interventions to be deemed necessary, there should at least be quantifiable consumer harms that require redress. Focusing regulation on such broad topics as “AI” or “software” is almost certain to generate unacceptable unseen costs.

Even if we limit our concern to real, existing weak AI, settling on an accepted “scientific” definition will be a challenge. Lawmakers inevitably will include either too much or too little. Overly inclusive regulation may seem like a good way to “future proof” the rules, but such future-proofing comes at the cost of significant legal uncertainty. It will also come at the cost of making some uses of software too costly to be worthwhile.

This post is the second in a planned series. The first installment can be found here.

In just over a century since its dawn, liberalism had reshaped much of the world along the lines of individualism, free markets, private property, contract, trade, and competition. A modest laissez-faire political philosophy that had begun to germinate in the minds of French Physiocrats in the early 18th century had, scarcely 150 years later, inspired the constitution of the world’s nascent leading power, the United States. But it wasn’t all plain sailing, as liberalism’s expansion eventually galvanized strong social, political, cultural, economic, and even spiritual opposition, which coalesced around two main ideologies: socialism and fascism.

In this post, I explore the collectivist backlash against liberalism, its deeper meaning from the perspective of political philosophy, and the main features of its two main antagonists—especially as they relate to competition and competition regulation. Ultimately, the purpose is to show that, in trying to respond to the collectivist threat, successive iterations of neoliberalism integrated some of collectivism’s key postulates in an attempt to create a synthesis between opposing philosophical currents. Yet this “mostly” liberal synthesis, which serves as the philosophical basis of many competition systems today, is afflicted with the same collectivist flaws that the synthesis purported to overthrow (as I will elaborate in subsequent posts).

The Collectivist Backlash

By the early 20th century, two deeply illiberal movements bent on exposing and demolishing the fallacies and contradictions of liberalism had succeeded in capturing the imagination and support of the masses. These collectivist ideologies were Marxian socialism/communism on the left and fascism/Nazism on the right. Although ultimately distinct, they both rejected the basic postulates of classical liberalism. 

Socially, both agreed that liberalism uprooted traditional ways of life and dissolved the bonds of solidarity that had hitherto governed social relationships. This is the view expressed, e.g., in Karl Polanyi’s influential book The Great Transformation, in which the Christian socialist Polanyi contends that “disembedded” liberal markets would inevitably come to be governed again by the principles of solidarity and reciprocity (under socialism/communism). Similarly, although not technically a work on political economy or philosophy, Knut Hamsun’s 1917 novel Growth of the Soil perfectly captures the right’s rejection of liberal progress, materialism, industrialization, and the idealization of traditional bucolic life. The Norwegian Hamsun, winner of the 1920 Nobel Prize in Literature, later became an enthusiastic supporter of the Third Reich. 

Politically and culturally, Marxist historical materialism posited that liberal democracy (individual freedoms, periodic elections, etc.) and liberal culture (literature, art, cinema) served the interests of the economically dominant class: the bourgeoisie, i.e., the owners of the means of production. Fascists and Nazis likewise deplored liberal democracy as a sign of decadence and weakness and viewed liberal culture as an oxymoron: a hotbed of degeneracy built on the dilution of national and racial identities. 

Economically, the more theoretically robust leftist critiques rallied around Marx’s scientific socialism, which held that capitalism—the economic system that served as the embodiment of a liberal social order built on private property, contract, and competition—was exploitative and doomed to consume itself. From the right, it was argued that liberalism enabled individual interest to override what was good for the collective—an unpardonable sin in the eyes of an ideology built around robust nodes of collectivist identity, such as nation, race, and history.

A Recurrent Civilizational Struggle

The rise of socialism and fascism marked the beginning of a civilizational shift that many have referred to as the lowest ebb of liberalism. By the 1930s, totalitarian regimes utterly incompatible with a liberal worldview were in place in several European countries, such as Italy, Russia, Germany, Portugal, Spain, and Romania. As Austrian economist Ludwig von Mises lamented, liberals and liberal ideas—at least, in the classical sense—had been driven to the fringes of society and academia, subject to scorn and ridicule. Even the liberally oriented, like economist John Maynard Keynes, were declaring the “end of laissez-faire.” 

At its most basic level, I believe that the conflict can be understood, from a philosophical perspective, as an iteration of the recurrent struggle between individualism and collectivism.

For instance, the German sociologist Ferdinand Tonnies has described the perennial tension between two elementary ways of conceiving the social order: Gesellschaft and Gemeinschaft. Gesellschaft refers to societies made up of individuals held together by formal bonds, such as contracts, whereas Gemeinschaft refers to communities held together by organic bonds, such as kinship, which function together as parts of an integrated whole. American law professor David Gerber explains that, from the Gemeinschaft perspective, competition was seen as an enemy:

Gemeinschaft required co-operation and the accommodation of individual interests to the commonwealth, but competition, in contrast, demanded that individuals be concerned first and foremost with their own self-interest. From this communitarian perspective, competition looked suspiciously like exploitation. The combined effect of competition and of political and economic inequality was that the strong would get stronger, the weak would get weaker, and the strong would use their strength to take from the weak.

Tonnies himself thought that dominant liberal notions of Gesellschaft would inevitably give way to greater integration of a socialist Gemeinschaft. This was somewhat reminiscent of Polanyi’s distinction between embedded and disembedded markets; Karl Popper’s “open” and “closed” societies; and possibly, albeit somewhat more remotely, David Hume’s distinction between “concord” and “union.” While we should be wary of reductivism, a common theme underlying these works (at least two of which are not liberal) is the conflict between opposing views of society: one that posits the subordination of the individual to some larger community or group versus another that anoints the individual’s well-being as the ultimate measure of the value of social arrangements. That basic tension, in turn, reverberates across social and economic questions, including as they relate to markets, competition, and the functions of the state.

Competition Under Marxism

Karl Marx argued that the course of history was determined by material relations among the social classes under any given system of production (historical materialism and dialectical materialism, respectively). Under that view, communism was not a desirable “state of affairs,” but the inevitable consequence of social forces as they then existed. As Marx and Friedrich Engels wrote in The Communist Manifesto:

Communism is for us not a state of affairs which is to be established, an ideal to which reality [will] have to adjust itself. We call communism the real movement which abolishes the present state of things. The conditions of this movement result from the premises now in existence.

Thus, following the ineluctable laws of history, which Marx claimed to have discovered, capitalism would inevitably come to be replaced by socialism and, subsequently, communism. Under socialism, the means of production would be controlled not by individuals interacting in a free market, but by the political process under the aegis of the state, with the corollary that planning would come to substitute for competition as the economy’s steering mechanism. This would then give way to communism: a stateless utopia in which everything would be owned by the community and where there would be no class divisions. This would come about as a result of the interplay of several factors inherent to capitalism, such as the exploitation of the working class and the impossibility of sustained competition.

Per Marx, under capitalism, owners of the means of production (i.e., the capitalists or the bourgeoisie) appropriate the surplus value (i.e., the difference between the sale price of a product and the cost to produce it) generated by workers. Thus, the lower the wages and the longer the working hours of the worker, the greater the profit accrued to the capitalist. This was not an unfortunate byproduct that could be reformed, Marx posited, but a central feature of the system that was solvable only through revolution. Moreover, the laws, culture, media, politics, faith, and other institutions that might ordinarily open alternative avenues to nonviolent resolution of class tensions (the “super-structure”) were themselves byproducts of the underlying material relations of production (“structure” or “base”), and thus served to justify and uphold them.

The Marxian position further held that competition—the lodestar and governing principle of the capitalist economy—was, like the system itself, unsustainable. It would inevitably end up cannibalizing itself. But the claim is a bit more subtle than critics of communism often assume. As Leon Trotsky wrote in the 1939 pamphlet Marxism in our time:

Relations between capitalists, who exploit the workers, are defined by competition, which for long endures as the mainspring of capitalist progress.

Two notions expressed succinctly in Trotsky’s statement need to be understood about the Marxian perception of competition. The first is that, since capitalism is exploitative of workers and competition among capitalists is the engine of capitalism, competition is itself effectively a mechanism of exploitation. Capitalists compete through the cheapening of commodities and the subsequent reinvestment of the surplus appropriated from labor into the expansion of productivity. The most exploitative capitalist, therefore, generally has the advantage (this hinges, of course, largely on the validity of the labor theory of value).

At the same time, however, Marxists (including Marx himself) recognized the economic and technological progress brought about through capitalism and competition. This is what Trotsky means when he refers to competition as the “mainspring of capitalist progress” and, by extension, the “historical justification of the capitalist.” The implication is that, if competition were to cease, the entire capitalist edifice and the political philosophy undergirding it (liberalism) would crumble, as well.

Whereas liberalism and competition were intertwined, liberalism and monopoly could not coexist. Instead, monopolists demanded—and, due to their political clout, were able to obtain—an increasingly powerful central state capable of imposing protective tariffs and other measures for their benefit and protection. Trotsky again:

The elimination of competition by monopoly marks the beginning of the disintegration of capitalist society. Competition was the creative mainspring of capitalism and the historical justification of the capitalist. By the same token the elimination of competition marks the transformation of stockholders into social parasites. Competition had to have certain liberties, a liberal atmosphere, a regime of democracy, of commercial cosmopolitanism. Monopoly needs as authoritative government as possible, tariff walls, “its own” sources of raw materials and arenas of marketing (colonies). The last word in the disintegration of monopolistic capital is fascism.

Marxian theory posited that this outcome was destined to happen for two reasons. First, because:

The battle of competition is fought by cheapening of commodities. The cheapness of commodities depends, ceteris paribus, on the productiveness of labor, and this again on the scale of production. Therefore, the larger capital beats the smaller.

In other words, competition stimulated the progressive development of productivity, which depended on the scale of production, which depended, in turn, on firm size. Ultimately, therefore, competition ended up producing a handful of large companies that would subjugate competitors and cannibalize competition. Thus, the more wealth that capitalism generated—and Marx had no doubts that capitalism was a wealth-generating machine—the more it sowed the seeds of its own destruction. Hence:

While stimulating the progressive development of technique, competition gradually consumes, not only the intermediary layers but itself as well. Over the corpses and the semi-corpses of small and middling capitalists, emerges an ever-decreasing number of ever more powerful capitalist overlords. Thus, out of “honest”, “democratic”, “progressive” competition grows irrevocably “harmful”, “parasitic”, “reactionary” monopoly.

The second reason Marxists believed the downfall of capitalism was inevitable is that the capitalists squeezed out of the market by the competitive process would become proletarians, which would create a glut of labor (“a growing reserve army of the unemployed”), which would in turn depress wages. This process of proletarianization, combined with the “revolutionary combination by association” of workers in factories would raise class consciousness and ultimately lead to the toppling of capitalism and the ushering in of socialism.

Thus, there is a clear nexus in Marxian theory between the end of competition and the end of capitalism (and therefore liberalism), whereby monopoly is deduced from the inherent tendencies of capitalism, and the end of capitalism, in turn, is deduced from the ineluctable advent of monopoly. What follows (i.e., socialism and communism) are collectivist systems that purport to be run according to the principles of solidarity and cooperation (“from each according to his abilities, to each according to his needs”), where there is therefore no place (and no need) for competition. Instead, the Marxian Gemeinschaft would organize the economy along rationalistic lines, substituting centralized command by the state (later, the community) for cut-throat competition, thereby reining in hitherto uncontrollable economic forces in a heroic victory over the chaos and unpredictability of capitalism. This would, of course, also bring about the end of liberalism, with individualism, private property, and other liberal freedoms jettisoned as mouthpieces of bourgeois class interests. Chairman Mao Zedong put it succinctly:

We must affirm anew the discipline of the Party, namely:

1. The individual is subordinate to the organization;

2. The minority is subordinate to the majority.

Competition Under Fascism/Nazism

Formidable as it was, the Marxian attack on liberalism was just one side of the coin. Decades after the articulation of Marxian theory in the mid-19th century, fascism—founded by former socialist Benito Mussolini in 1915—emerged as a militant alternative to both liberalism and socialism/communism.

In essence, fascism was, like communism, unapologetically collectivist. But whereas socialists considered class to be the relevant building block of society, fascists viewed the individual as part of a greater national, racial, and historical entity embodied in the state and its leadership. As Mussolini wrote in his 1932 pamphlet The Doctrine of Fascism:

Anti-individualistic, the Fascist conception of life stresses the importance of the State and accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience of the universal, will of man as a historic entity. It is opposed to classical liberalism […] liberalism denied the State in the name of the individual; Fascism reasserts.

Accordingly, fascism leads to an amalgamation of state and individual that is not just a politico-economic arrangement where the latter formally submits to the former, but a conception of life. This worldview is, of course, diametrically opposed to core liberal principles, such as personal freedom, individualism, and the minimal state. And surely enough, fascists saw these liberal values as signs of civilizational decadence (as expressed most notably by Oswald Spengler in The Decline of the West—a book that greatly inspired Nazi ideology). Instead, they posited that the only freedom worthy of the name existed within the state; that peace and cosmopolitanism were illusory; and that man was man only by virtue of his membership and contribution to nation and race.

But fascism was also opposed to Marxian socialism. At its most basic, the schism between the two worldviews can be understood in terms of the fascist rejection of materialism, which was a centerpiece of Marxian thought. Fascists denied the equivalence of material well-being and happiness, instead viewing man as fulfilled by hardship, by war, and by playing his part in the grand tapestry of history, whose real protagonists were nation-states. While admitting the importance of economic life—e.g., of efficiency and technological innovation—fascists denied that material relations unequivocally determined the course of history, insisting instead on the preponderance of spiritual and heroic acts (i.e., acts with no economic motive) as drivers of social change. “Sanctity and heroism,” Mussolini wrote, are at the root of the fascist belief system, not material self-interest.

This belief system also extended to economic matters, including competition. The Third Reich respected private property rights to some degree—among other reasons, because Adolf Hitler believed it would encourage creative competition and innovation. The Nazis’ overarching principle, however, was that all economic activity and all private property ultimately be subordinated to the “common good,” as interpreted by the state. In the words of Hitler:

I want everyone to keep what he has earned subject to the principle that the good of the community takes priority over that of the individual. But the State should retain control; every owner should feel himself to be an agent of the State. […] The Third Reich will always retain the right to control property owners.

The solution was a totalitarian system of government control that maintained private enterprise and profit incentives as spurs to efficient management, but narrowly circumscribed the traditional freedom of entrepreneurs. Economic historians Christoph Buchheim and Jonas Scherner have characterized the Nazis’ economic system as a “state-directed private ownership economy,” a partnership in which the state was the principal and the business was the agent. Economic activity would be judged according to the criteria of “strategic necessity and social utility,” encompassing an array of social, political, practical, and ideological goals. Some have referred to this as the “primacy of politics over economics” approach.

For instance, in supervising cross-border acquisitions (what we would today call mergers), the state “sought to suppress purely economic motives and to substitute some rough notion of ‘racial political’ priority when supervising industrial acquisitions or controlling existing German subsidiaries.” The Reich selectively applied the 1933 Act for the Formation of Compulsory Cartels when regulating cartels that had been formed under the Weimar Republic’s Cartel Act of 1923. But the legislation also appears to have been applied to protect small and medium-sized enterprises, an important source of the party’s political support, from ruinous competition. This is reminiscent of German industrialist and Nazi supporter Gustav Krupp’s “Third Form”: 

Between “free” economy and state capitalism there is a third form: the economy that is free from obligations, but has a sense of inner duty to the state. 

In short, competition and individual achievement had to be balanced with cooperation, mediated by the self-appointed guardians of the “general interest.” In contrast with Marxian socialism/communism, the long-term goal of the Nazi regime was not to abolish competition, but to harness it to serve the aims of the regime. As Franz Böhm—cofounder, with Walter Eucken, of the Freiburg School and its theory of “ordoliberalism”—wrote in his advice to the Nazi government:

The state regulatory framework gives the Reich economic leadership the power to make administrative commands applying either the indirect or the direct steering competence according to need, functionality, and political intent. The leadership may go as far as it wishes in this regard, for example, by suspending competition-based economic steering and returning to it when appropriate. 

Conclusion

After a century of expansion, opposition to classical liberalism started to coalesce around two nodes: Marxism on the left, and fascism/Nazism on the right. What ensued was a civilizational crisis of material, social, and spiritual proportions that, at its most basic level, can be understood as an iteration of the perennial struggle between individualism and collectivism. On the one hand, liberals like J.S. Mill had argued forcefully that “the only freedom which deserves the name, is that of pursuing our own good in our own way.” In stark contrast, Mussolini wrote that “fascism stands for liberty, and for the only liberty worth having, the liberty of the state and of the individual within the state.” The former position is rooted in a humanist view that enshrines the individual at the center of the social order; the latter in a communitarian ideal that sees him as subordinate to forces that supersede him.

As I have explained in the previous post, the philosophical undercurrents of both positions are ancient. A more immediate precursor of the collectivist standpoint, however, can be found in German idealism and particularly in Georg Wilhelm Friedrich Hegel. In The Philosophy of Right, he wrote:

A single person, I need hardly say, is something subordinate, and as such he must dedicate himself to the ethical whole. Hence, if the state claims life, the individual must surrender it. All the worth which the human being possesses […] he possesses only through the state.

This broader clash is reflected, directly and indirectly, in notions of competition and competition regulation. Classical liberals sought to liberate competition from regulatory fetters. Marxism “predicted” its downfall and envisioned a social order without it. Fascism/Nazism sought to wrest it from the hands of greedy self-interest and mold it to serve the many and fluctuating objectives of the state and its vision of the common good.

In the next post, I will discuss how this has influenced the neoliberal philosophy that is still at the heart of many competition systems today. I will argue that two strands of neoliberalism emerged, which each attempted to resolve the challenge of collectivism in distinct ways. 

One strand, associated with a continental understanding of liberalism and epitomized by the Freiburg School, sought to strike a “mostly liberal” compromise between liberalism and collectivism—a “Third Way” between opposites. In doing so, however, it may have indulged in some of the same collectivist vices that it initially sought to avoid—such as vast government discretion and the imposition of myriad “higher” goals on society.

The other strand, represented by Anglo-American liberalism of the sort espoused by Friedrich Hayek and Milton Friedman, was less conciliatory. It attempted to reform, rather than reinvent, liberalism. Their prescriptions involved creating a strong legal framework conducive to economic efficiency against a background of limited government discretion, freedom, and the rule of law.

There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.) and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.

In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability; and (2) mandating device neutrality, which opens the door to sideloading applications (itself a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).

Interoperability

Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services and with large online platforms, in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. This way, it might matter less than it does now what content moderation decisions Facebook or other platforms make. Facebook users could choose alternative content moderators, delivering the kind of news feed that those users expect.
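To illustrate the concept (and only to illustrate it; no platform currently exposes such an interface, and every name below is hypothetical), a newsfeed opened up to third-party ranking might look something like this minimal Python sketch:

```python
from typing import Callable, Dict, List

# Hypothetical pluggable-ranking interface. A platform exposing
# something like this would let users substitute a third-party
# ranker for the platform's default.
Post = Dict[str, int]                       # e.g., {"id": 1, "likes": 42}
Ranker = Callable[[List[Post]], List[Post]]

def default_ranker(posts: List[Post]) -> List[Post]:
    # Platform default: engagement-based ordering.
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def chronological_ranker(posts: List[Post]) -> List[Post]:
    # A third-party "flavor": strictly newest-first.
    return sorted(posts, key=lambda p: p["id"], reverse=True)

def build_feed(posts: List[Post], ranker: Ranker = default_ranker) -> List[Post]:
    return ranker(posts)

posts = [{"id": 1, "likes": 7}, {"id": 2, "likes": 42}]
print(build_feed(posts))                        # platform's ordering
print(build_feed(posts, chronological_ranker))  # user-chosen ordering
```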

The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
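The postcard analogy can be made concrete with a minimal sketch of a legacy SMTP hand-off (the host and addresses below are placeholders). Absent later extensions such as STARTTLS, both envelope and body cross the wire in the clear:

```python
import smtplib

# A bare SMTP session as the protocol was originally conceived: no
# encryption and no sender authentication, so any relay on the path
# can read the message, and the sender field can be forged, much
# like a postcard. Host and addresses are placeholders.
message = "Subject: Hello\r\n\r\nThis text is sent in plaintext."

with smtplib.SMTP("mail.example.com", 25) as server:
    # Without server.starttls(), the whole exchange is unencrypted.
    server.sendmail("alice@example.com", ["bob@example.org"], message)
```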

To provide alternative interfaces or moderation services for social-media platforms using currently available technology, third-party developers would need access to much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

Several constraints must be in place for an interoperability framework to safeguard privacy and security effectively.

First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.

Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.

If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.

It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether the interoperating third parties’ IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.
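A minimal sketch shows why such a filter is weak (the lookup function, addresses, and “approved” list are all hypothetical): the check can only ever evaluate the connecting address, and a motivated actor simply routes traffic through a rented server or VPN exit in an approved jurisdiction:

```python
# Hypothetical jurisdiction filter for interoperating third parties.
APPROVED_JURISDICTIONS = {"DE", "FR", "US"}

def geoip_country(ip_address: str) -> str:
    # Placeholder for a real geolocation lookup; entries are invented.
    fake_db = {"203.0.113.5": "DE", "198.51.100.7": "KP"}
    return fake_db.get(ip_address, "??")

def may_interoperate(ip_address: str) -> bool:
    # The filter sees only the connecting IP. Proxying through an
    # approved jurisdiction makes a blocked actor indistinguishable
    # from a permitted one.
    return geoip_country(ip_address) in APPROVED_JURISDICTIONS

print(may_interoperate("203.0.113.5"))   # True: appears to be in "DE"
print(may_interoperate("198.51.100.7"))  # False: blocked, until proxied
```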

Article 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited, applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately.

The drafts of the DMA adopted by the European Council and by the European Parliament attempt to address that, but they only allow gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high and could push gatekeepers to offer lower security to avoid liability for adopting measures that would be judged by EU institutions and the courts as going beyond what is strictly necessary or indispensable.

The more recent DMA proposal from the European Parliament goes significantly beyond the original proposal, mandating full interoperability of a number of “independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning “high level of security and personal data protection” does not come close to addressing the seriousness of the risks created by the mandate. Instead of facing up to the risks and ensuring that the mandate itself be limited in ways that minimize them, the proposal seems just to expect that the gatekeepers can solve the problems if they only “nerd harder.”

All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:

restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.

The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.

Device Neutrality (Sideloading)

Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing installation of third-party software through alternative app stores, rather than solely the one provided by the manufacturer (e.g., Apple’s App Store for iOS devices). A similar express provision for sideloading is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require permitting sideloading.

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS). By taking away the choice of a walled-garden environment, a sideloading mandate will effectively force users to use whatever alternative app stores are preferred by particular app developers. App developers would have a strong incentive to set up their own app stores or to move their apps to the app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.

This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly. 

Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the European Council and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.

There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would prohibit outright a privacy and security-protection model that many users rationally choose today. Even with broader exemptions, this loss will be genuine. It is unclear whether taking away this choice from users is justified.

Conclusion

All the U.S. and EU legislative proposals considered here betray a policy preference for privileging uncertain and speculative competition gains, even at the price of introducing new and clear dangers to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source).

Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors coming from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service paid by the legislative proposals to privacy and security is not much more than that. Policymakers should be expected to offer a much more detailed vision of concrete safeguards and mechanisms of enforcement when proposing rules that come with significant and entirely predictable privacy and security risks. Such vision is lacking on both sides of the Atlantic.

I do not want to suggest that interoperability is undesirable. My argument is focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.

Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private ordering processes that seek out those forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to broadly open their services and data to all interested parties, even where it wouldn’t make sense for privacy, security, or other efficiency reasons. It is underinclusive in that the broad mandate will necessarily sap regulated firms’ resources and deter them from looking for new innovative uses that might make sense, but that are outside of the broad mandate. Thus, the likely result is less security and privacy, more expense, and less innovation.