
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:

  1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
  2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and so is not strictly the maximization of the other person’s welfare from their own point of view.
  3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
  4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.

This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.

The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility, but rather to limit data handling so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought there was a risk of market-power harms from the large Microsoft Windows platform a generation ago.[2] Where market-power risks are proven, there is a strong case that data exclusivity raises concerns because it is an artificial barrier to entry. This would be untrue only if the benefits of centralized data control outweighed the deadweight loss from data restrictions (though query how well the legal processes verify this).

Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” as to others’ data.

But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


[1] Milton Friedman, Free to Choose (1980), pp. 115-119.

[2] Comments at the Yale Law School conference, Robert H. Bork’s Influence on Antitrust Law, Sept. 27-28, 2013.

Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act, which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and a departure from established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but also lawyers from nonprofits, consulting firms, and federal and state agencies, as well as judges and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process rather than to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already prohibited by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition on pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as Trinko, which established that even monopolists are generally free to decide with whom they will deal.

Arguably, the reason the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is that it deliberately ignores another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can capture consumer-facing conduct with no potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear… This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function” and when would the law determine it has been sufficiently “enhanced” or “maintained”—requirements the law sets out to exempt certain otherwise prohibited behavior? The lack of linguistic and conceptual clarity not only erodes legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors.

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of legislation that contravenes both basic antitrust-law principles and mainstream economic insights in pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

After years of debate and negotiation, European lawmakers have agreed upon what will most likely be the final iteration of the Digital Markets Act (“DMA”), following the March 24 final round of “trilogue” talks.

For the uninitiated, the DMA is one in a string of legislative proposals around the globe intended to “rein in” tech companies like Google, Amazon, Facebook, and Apple through mandated interoperability requirements and other regulatory tools, such as bans on self-preferencing. Other important bills from across the pond include the American Innovation and Choice Online Act, the ACCESS Act, and the Open App Markets Act.

In many ways, the final version of the DMA represents the worst possible outcome, given the items that were still up for debate. The Commission caved to some of the Parliament’s more excessive demands—such as sweeping interoperability provisions that would extend not only to “ancillary” services, such as payments, but also to messaging services’ basic functionalities. Other important developments include the addition of voice assistants and web browsers to the list of Core Platform Services (“CPS”), and symbolically higher “designation” thresholds that further ensure the act will apply overwhelmingly to just U.S. companies. On a brighter note, lawmakers agreed that companies could rebut their designation as “gatekeepers,” though it remains to be seen how feasible that will be in practice. 

We offer here an overview of the key provisions included in the final version of the DMA and a reminder of the shaky foundations it rests on.

Interoperability

Among the most important of the DMA’s new rules concerns mandatory interoperability among online platforms. In a nutshell, digital platforms that are designated as “gatekeepers” will be forced to make their services “interoperable” (i.e., compatible) with those of rivals. It is argued that this will make online markets more contestable and thus boost consumer choice. But as ICLE scholars have been explaining for some time, this is unlikely to be the case (here, here, and here). Interoperability is not the panacea EU legislators claim it to be. As former ICLE Director of Competition Policy Sam Bowman has written, there are many things that could be interoperable, but aren’t. The reason is that interoperability comes with costs as well as benefits. For instance, it may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to the market and for consumers to be able to choose among them. Economists Michael L. Katz and Carl Shapiro concur:

Although compatibility has obvious benefits, obtaining and maintaining compatibility often involves a sacrifice in terms of product variety or restraints on innovation.

There are other potential downsides to interoperability. For instance, a given set of interoperable standards might be too costly to implement and/or maintain; it might preclude certain pricing models that increase output; or it might compromise some element of a product or service that offers benefits specifically because it is not interoperable (such as security features). Consumers may also genuinely prefer closed (i.e., non-interoperable) platforms. Indeed: “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play that belie simplistic characterizations of one being inherently superior to the other.

Further, as Sam Bowman observed, narrowing choice through a more curated experience can also be valuable for users, as it frees them from having to research every possible option every time they buy or use some product (if you’re unconvinced, try turning off your spam filter for a couple of days). Instead, the relevant choice consumers exercise might be in choosing among brands. In sum, where interoperability is a desirable feature, consumer preferences will tend to push for more of it. However, it is fundamentally misguided to treat mandatory interoperability as a cure-all elixir or a “super tool” of “digital platform governance.” In a free-market economy, it is not—and should not be—up to courts and legislators to substitute their own judgment, based on diffuse notions of “fairness,” for businesses’ product-design decisions and consumers’ revealed preferences. After all, if we could entrust such decisions to regulators, we wouldn’t need markets or competition in the first place.

Of course, it was always clear that the DMA would contemplate some degree of mandatory interoperability; indeed, this was arguably the new law’s biggest selling point. What was up in the air until now was the scope of such obligations. The Commission had initially pushed for a comparatively restrained approach, requiring interoperability “only” in ancillary services, such as payment systems (“vertical interoperability”). By contrast, the European Parliament called for more expansive requirements that would also encompass social-media platforms and other messaging services (“horizontal interoperability”).

The problem with such far-reaching interoperability requirements is that they are fundamentally out of step with current privacy and security capabilities. As ICLE Senior Scholar Mikolaj Barczentewicz has repeatedly argued, the Parliament’s insistence on going significantly beyond the original DMA proposal and mandating interoperability of messaging services is overly broad and irresponsible. Indeed, as Mikolaj notes, the “likely result is less security and privacy, more expenses, and less innovation.” The DMA’s defenders would retort that the law allows gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament) to protect safety and privacy (it is not yet clear which wording the final version has adopted). Either way, however, the standard may be too high, and companies may well offer lower security to avoid liability for adopting measures that the Commission and the courts would judge to go beyond what is “strictly necessary” or “indispensable.” These safeguards will inevitably be all the more indeterminate (and thus ineffectual) if weighed against other vague concepts at the heart of the DMA, such as “fairness.”

Gatekeeper Thresholds and the Designation Process

Another important issue in the DMA’s construction concerns the designation of what the law deems “gatekeepers.” Indeed, the DMA will only apply to such market gatekeepers—so-designated because they meet certain requirements and thresholds. Unfortunately, the factors that the European Commission will consider in conducting this designation process—revenues, market capitalization, and user base—are poor proxies for firms’ actual competitive position. This is not surprising, however, as the procedure is mainly designed to ensure certain high-profile (and overwhelmingly American) platforms are caught by the DMA.

From this perspective, the last-minute increase in revenue and market-capitalization thresholds—from 6.5 billion euros to 7.5 billion euros, and from 65 billion euros to 75 billion euros, respectively—won’t change the scope of the companies covered by the DMA very much. But it will serve to confirm what we already suspected: that the DMA’s thresholds are mostly tailored to catch certain U.S. companies, deliberately leaving out EU and possibly Chinese competitors (see here and here). Indeed, what would have made a difference here would have been lowering the thresholds, but this was never really on the table. Ultimately, tilting the European Union’s playing field against its top trading partner, in terms of exports and trade balance, is economically, politically, and strategically unwise.

As a consolation of sorts, it seems that the Commission managed to squeeze in a rebuttal mechanism for designated gatekeepers. Imposing far-reaching obligations on companies with no (or very limited) recourse to escape the onerous requirements of the DMA would be contrary to the basic principles of procedural fairness. Still, it remains to be seen how this mechanism will be articulated and whether it will actually be viable in practice.

Double (and Triple?) Jeopardy

Two recent judgments from the European Court of Justice (ECJ)—Nordzucker and bpost—are likely to underscore the unintended effects of cumulative application of both the DMA and EU and/or national competition laws. The bpost decision is particularly relevant, because it lays down the conditions under which cases that evaluate the same persons and the same facts in two separate fields of law (sectoral regulation and competition law) do not violate the principle of ne bis in idem, also known as “double jeopardy.” As paragraph 51 of the judgment establishes:

  1. There must be precise rules to determine which acts or omissions are liable to be subject to duplicate proceedings;
  2. The two sets of proceedings must have been conducted in a sufficiently coordinated manner and within a similar timeframe; and
  3. The overall penalties must match the seriousness of the offense. 

It is doubtful whether the DMA fulfills these conditions. This is especially unfortunate considering the overlapping rules, features, and goals of the DMA and national-level competition laws, which are bound to lead to parallel procedures. In short: expect double and triple jeopardy to be hotly litigated in the aftermath of the DMA.

Of course, other relevant questions have been settled which, for reasons of scope, we will have to leave for another time. These include the level of fines (up to 10% worldwide revenue, or 20% in the case of repeat offenses); the definition and consequences of systemic noncompliance (it seems that the Parliament’s draconian push for a general ban on acquisitions in case of systemic noncompliance has been dropped); and the addition of more core platform services (web browsers and voice assistants).

The DMA’s Dubious Underlying Assumptions

The fuss and exhilaration surrounding the impending adoption of the EU’s most ambitious competition-related proposal in decades should not obscure the dubious assumptions that underpin it. Consider that:

  1. It is still unclear that intervention in digital markets is necessary, let alone urgent.
  2. Even if it were clear, there is scant evidence to suggest that tried and tested ex post instruments, such as those envisioned in EU competition law, are not up to the task.
  3. Even if the prior two points had been established beyond any reasonable doubt (which they haven’t), it is still far from clear that DMA-style ex ante regulation is the right tool to address potential harms to competition and to consumers that arise in digital markets.

It is unclear that intervention is necessary

Despite a mounting moral panic around, and zealous political crusading against, Big Tech (an epithet meant to conjure antipathy and distrust), it is still unclear that intervention in digital markets is necessary. Much of the behavior the DMA assumes to be anti-competitive has plausible pro-competitive justifications. Self-preferencing, for instance, is a normal part of how platforms operate, both to improve the value of their core products and to earn returns to reinvest in their development. As ICLE’s Dirk Auer points out, since platforms’ incentives are to maximize the value of their entire product ecosystem, those that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product (the example of Facebook’s integration of Instagram is a case in point). Thus, while self-preferencing may, in some cases, be harmful, a blanket presumption of harm is thoroughly unwarranted.

Similarly, the argument that switching costs and data-related increasing returns to scale (in fact, data generally entails diminishing returns) have led to consumer lock-in and thereby raised entry barriers has also been exaggerated to epic proportions (pun intended). As we have discussed previously, there are plenty of counterexamples where firms have easily overcome seemingly “insurmountable” barriers to entry, switching costs, and network effects to disrupt incumbents. 

To pick a recent case: how many of us had heard of Zoom before the pandemic? Where was TikTok three years ago? (see here for a multitude of other classic examples, including Yahoo and Myspace).

Can you really say, with a straight face, that switching costs between messaging apps are prohibitive? I’m not even that active and I use at least seven such apps on a daily basis: Facebook Messenger, WhatsApp, Instagram, Twitter, Viber, Telegram, and Slack (it took me all of three minutes to download and start using Slack—my newest addition). In fact, chances are that, like me, you have always multihomed nonchalantly and had never even considered that switching costs were impossibly high (or that they were a thing) until the idea that you were “locked in” by Big Tech was drilled into your head by politicians and other busybodies looking for trophies to adorn their walls.

What about the “unprecedented,” quasi-fascistic levels of economic concentration? First, measures of market concentration are sometimes anchored in flawed methodology and market definitions (see, e.g., Epic’s insistence that Apple is a monopolist in the market for operating systems, conveniently ignoring that competition occurs at the smartphone level, where Apple has a worldwide market share of 15%—see pages 45-46 here). But even if such measurements were accurate, high levels of concentration don’t necessarily mean that firms do not face strong competition. In fact, as Nicolas Petit has shown, tech companies compete vigorously against each other across markets.

But perhaps the DMA’s raison d’être rests less on market failure than on legal or enforcement failure? This, too, is misguided.

EU competition law is already up to the task

As Giuseppe Colangelo has argued persuasively (here and here), it is not at all clear that ex post competition regulation is insufficient to tackle anti-competitive behavior in the digital sector:

Ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices described as new and peculiar to app stores. 

The recent Google Shopping decision, in which the Commission found that Google had abused its dominant position by preferencing its own online-shopping service in Google Search results, is a case in point (the decision was confirmed by the General Court and is now pending review before the European Court of Justice). The “self-preferencing” category has since been applied by other EU competition authorities. The Italian competition authority, for instance, fined Amazon 1 billion euros for preferencing its own distribution service, Fulfilled by Amazon, on the Amazon marketplace (i.e., Amazon.it). Thus, Article 102, which includes prohibitions on “applying dissimilar conditions to similar transactions,” appears sufficiently flexible to cover self-preferencing, as well as other potentially anti-competitive offenses relevant to digital markets (e.g., essential facilities).

For better or worse, EU competition law has historically been sufficiently pliable to serve a range of goals and values. It has also allowed for experimentation and has incorporated novel theories of harm and economic insights. Here, the advantage of competition law is that it allows for a more refined, individualized approach that can avoid some of the pitfalls of applying a one-size-fits-all model across all digital platforms. Those pitfalls include: harming consumers, jeopardizing the business models of some of the most successful and pro-consumer companies in existence, and ignoring the differences among platforms, such as between Google’s and Apple’s app stores. I turn to these issues next.

Ex ante regulation probably isn’t the right tool

Even if it were clear that intervention is necessary and that existing competition law is insufficient, it is not clear that the DMA is the right regulatory tool to address any potential harms to competition and consumers that may arise in digital markets. Here, legislators need to be wary of unintended consequences, trade-offs, and regulatory fallibility. For one, it is possible that the DMA will essentially consolidate the power of tech platforms, turning them into de facto public utilities. This will not foster competition, but rather will make smaller competitors systematically dependent on so-called gatekeepers. Indeed, why become the next Google if you can just free ride off the current Google? Why download an emerging messaging app if you can already interact with its users through your current one? In a way, then, the DMA may become a self-fulfilling prophecy.

Moreover, turning closed or semi-closed platforms such as iOS into open platforms more akin to Android blurs the distinctions among products and dampens interbrand competition. It is a supreme paradox that interoperability and sideloading requirements purportedly give users more choice by taking away the option of choosing a “walled garden” model. As discussed above, overriding the revealed preferences of millions of users is neither pro-competitive nor pro-consumer (though it probably favors some competitors at the expense of both).

Nor are many of the other obligations contemplated in the DMA necessarily beneficial to consumers. Do users really not want default apps to come preloaded on their devices, preferring instead to download and install them manually? Ditto for operating systems. What is the point of an operating system if it doesn’t come with certain functionalities, such as a web browser? What else should we unbundle—the keyboard on iOS? The flashlight? Do consumers really want to choose from dozens of app stores when turning on their new phone for the first time? Do they really want to have their devices cluttered with pointless split-screens? Do users really want to find all their contacts (and be found by all their contacts) across all messaging services? (I switched to Viber because I emphatically didn’t.) Do they really want to have their privacy and security compromised because of interoperability requirements? Then there is the question of regulatory fallibility. As Alden Abbott has written on the DMA and other ex ante regulatory proposals aimed at “reining in” tech companies:

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). 

This brings us back to the second point: without evidence that antitrust law is “not up to the task,” far-reaching and untested regulatory initiatives with potentially high error costs are put forth as superior to long-established, consumer-based antitrust enforcement. Yes, antitrust may have downsides (e.g., relative indeterminacy and slowness), but these pale in comparison to the DMA’s (e.g., large error costs resulting from high information requirements, rent-seeking, agency capture).

Conclusion

The DMA is an ambitious piece of regulation purportedly aimed at ensuring “fair and open digital markets.” This implies that markets are currently unfair and closed, or that they risk becoming so absent far-reaching regulatory intervention at the EU level. However, it is unclear to what extent such assumptions are borne out by the reality of markets. Are digital markets really closed? Are they really unfair? If so, is it really certain that regulation is necessary? Has antitrust truly proven insufficient? The DMA also presumes that ex ante regulation is the right remedy and that its costs won’t outweigh its benefits. These are heroic assumptions that have never been seriously put to the test.

Considering such brittle empirical foundations, the DMA was always going to be a contentious piece of legislation. However, there was always the hope that EU legislators would show restraint in the face of little empirical evidence and high error costs. Today, these hopes have been dashed. With the adoption of the DMA, the Commission, Council, and the Parliament have arguably taken a bad piece of legislation and made it worse. The interoperability requirements in messaging services, which are bound to be a bane for user privacy and security, are a case in point.

After years of trying to anticipate the whims of EU legislators, we finally know where we’re going, but it’s still not entirely clear why.

The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA) during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.

The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the negative costs imposed upon consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.

Below are 10 significant misconceptions that underpin the legislation.

1. There Is No Evidence that Self-Preferencing Is Generally Harmful

Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.

Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.

As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

Consider a few examples from the empirical literature:

  1. Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
  2. Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
  3. Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
  4. Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”

2. Interoperability Is Not Costless

There are many things that could be interoperable, but aren’t. The reason not everything is interoperable is because interoperability comes with costs, as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds.

As Sam Bowman has observed, there are often costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.

In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

3. Consumers Often Prefer Closed Ecosystems

Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?

Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.

The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.

They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.

4. Data Portability Can Undermine Security and Privacy

As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.

Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.

As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.
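The arithmetic behind that amplification is worth making explicit. Below is a back-of-the-envelope sketch in Python; the average friend count is an assumed figure, chosen only to show how a modest number of consenting users can expose a vastly larger population, not a documented Facebook statistic.

```python
# Illustrative sketch only: how friend-graph API access multiplies exposure.
# avg_friends is an assumption for illustration, not a reported number.

consenting_users = 270_000   # users who reportedly granted the app access
avg_friends = 185            # assumed average friend count per consenting user

# If the API exposes each consenting user's friends, the number of reachable
# users scales multiplicatively (ignoring overlap between friend lists).
reachable = consenting_users * avg_friends
print(f"Upper bound on users reachable: ~{reachable:,}")  # ~49,950,000

# Real friend lists overlap, so the true figure is lower; even a 50% overlap
# discount still leaves tens of millions of non-consenting users exposed.
print(f"With 50% overlap discount: ~{reachable // 2:,}")
```

The point is not the precise numbers but the structure: consent by a tiny fraction of users can unlock data on a population orders of magnitude larger.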

In short, there is often no simple way to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.

5. Network Effects Are Rarely Insurmountable

Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.

6. Profits Facilitate New and Exciting Platforms

As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers/retailers to rapidly expand because (i) users do not have to fear that their apps contain some form of malware and (ii) payment frictions, most notably security-related ones, are greatly reduced.

While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.

This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.

7. Large Market Share Does Not Mean Anticompetitive Outcomes

Scholars routinely cite the putatively strong concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as economists like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that alone will discipline incumbents’ behavior.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and of capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.

8. Vertical Integration Generally Benefits Consumers

Vertical behavior by digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of those studies are regularly overstated and, even taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.

There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:

[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.

In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.

9. There Is No Such Thing as Data Network Effects

Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As was shown in the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:

Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.

Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.
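One statistical mechanism behind those diminishing returns can be shown in a few lines of code. The sketch below is illustrative only and is not drawn from the survey cited above: it estimates a simple population average and shows that each tenfold increase in data shrinks the estimation error by only about a factor of three, so the marginal value of each additional observation falls as the dataset grows.

```python
import numpy as np

# Illustrative sketch (not from the cited survey): diminishing marginal
# returns to data in a simple estimation task. The standard error of a
# sample mean falls as 1/sqrt(n), so each extra observation helps less.

rng = np.random.default_rng(seed=0)

for n in (1_000, 10_000, 100_000, 1_000_000):
    sample = rng.normal(loc=100.0, scale=15.0, size=n)
    std_error = sample.std(ddof=1) / np.sqrt(n)  # error of the mean estimate
    print(f"n = {n:>9,}  standard error ≈ {std_error:.4f}")

# Ten times more data buys only ~3.16x more precision: returns diminish fast.
```

Real learning systems are more complicated than a sample mean, of course, but many exhibit the same qualitative pattern: early data is enormously valuable, and later data much less so.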

10. Antitrust Enforcement Has Not Been Lax

The popular narrative has it that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.

As Geoff Manne observed in his April 2020 testimony to the House Judiciary Committee:

The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to make conclusions about how effective antitrust law is. Firms act in the shadow of the law, and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.

The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether antitrust enforcement levels are optimal.


On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” intended to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by the developer of an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability: one company’s product working with another company’s.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to choose among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There may thus be particular costs that make interoperability not worth the tradeoff; for example:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop here: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from less reliable or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nod to the costs of this requirement is a pair of provisions: one that further requires platforms to set “reasonably necessary” security standards, and one that allows the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads as if interoperability had been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto for the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report simply assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it is not straightforward to determine which outweighs the other.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantive reports cited by the “Super Tool” report really try to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK, cited by the “Super Tool” report as an example of what it wants for other markets, still isn’t finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved, and it has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at the European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court’s dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding gives cause for questioning. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information, or what Guido Calabresi has called “starting points” for regulation, that premature legislative intervention sweeps under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe show unprecedented vitality in the digital sector. Venture capital funding cruises at historic heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries (or discount the social value of entry in the digital sector), this should give us reason to push pause on lawmaking efforts.

The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation?

Besides, ad hoc statutes, such as the ones under discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral-conduct cases and in merger law? The answer is surely no. The theoretical economic literature has travelled a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?
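
The tipping dynamic is easy to see in a toy simulation. The sketch below is written in the spirit of Arthur’s sequential-adoption models, but it is a simplified illustration under invented parameters, not his exact specification.

```python
# Toy simulation of technology lock-in under increasing returns, loosely
# in the spirit of Brian Arthur's adoption models. All parameters are
# invented for illustration.
import random

def simulate(n_adopters: int = 10_000, bonus: float = 0.02,
             quality: tuple = (1.0, 1.1), seed: int = 0) -> list:
    """Adopters arrive one at a time and pick technology A or B.
    Utility = intrinsic quality + idiosyncratic taste (Gaussian noise)
    + a network bonus per previous adopter. Returns final counts."""
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n_adopters):
        utils = [quality[i] + rng.gauss(0, 1) + bonus * counts[i] for i in (0, 1)]
        counts[utils.index(max(utils))] += 1
    return counts

# Technology B is intrinsically better (1.1 vs. 1.0), yet early random
# draws can tip the market to the inferior standard A in some runs.
results = [simulate(seed=s) for s in range(20)]
inferior_wins = sum(1 for a, b in results if a > b)
print(f"Inferior technology locked in {inferior_wins} of 20 runs")
```

The simulation illustrates the descriptive point in the text: lock-in to a suboptimal standard is possible, but nothing in the model tells a policymaker whether any particular real-world market has actually tipped the wrong way.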

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricy but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

Now, as is clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see? 
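
To make the “flavors” idea concrete, here is a sketch of what such a third-party ranking interface could look like. Everything in it is hypothetical: no real platform exposes this API, and all names and fields are invented.

```python
# Hypothetical sketch of a third-party "feed flavor" interface. No real
# platform exposes this API; all names and fields are invented.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    posted_at: float  # Unix timestamp
    likes: int
    flagged: bool     # by the flavor's own moderation rules

# A "flavor" is just a function from candidate posts to a ranked feed.
FeedFlavor = Callable[[List[Post]], List[Post]]

def calm_chronological(posts: List[Post]) -> List[Post]:
    """A flavor with strict moderation and no engagement ranking."""
    kept = [p for p in posts if not p.flagged]
    return sorted(kept, key=lambda p: p.posted_at, reverse=True)

def engagement_maximizing(posts: List[Post]) -> List[Post]:
    """A flavor that ranks purely by likes and moderates nothing."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render(candidates: List[Post], flavor: FeedFlavor) -> List[str]:
    """The platform supplies candidates; the user's chosen flavor ranks them."""
    return [f"{p.author}: {p.text}" for p in flavor(candidates)]
```

Note the catch hiding in `render`: every flavor needs the full candidate list to do its job, which is exactly the data-access problem described next.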

But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree to their data being shared with third parties, but also content (e.g., posts, comments, likes) created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.
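
The postcard analogy is not a stretch. The sketch below uses Python’s standard smtplib and assumes a hypothetical plain (non-TLS) SMTP server on localhost with invented addresses; without STARTTLS, every byte of the exchange crosses the network in cleartext.

```python
import smtplib

# A minimal sketch, assuming a plain (non-TLS) SMTP server listening on
# localhost:1025 for testing; the server and addresses are invented.
# Without STARTTLS, the envelope, headers, and body below all cross the
# network in cleartext, readable by every relay along the path: a postcard.
with smtplib.SMTP("localhost", 1025) as server:
    server.sendmail(
        "alice@example.org",
        ["bob@example.net"],
        "Subject: interoperable, but not private\r\n"
        "\r\n"
        "Anyone handling this message in transit can read it.\r\n",
    )
```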

It also is telling that supporters of interoperability tend to point to small-scale platforms (e.g., Mastodon) or to protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems (e.g., that users will adequately monitor how various platforms use their data), they often assume unrealistic levels of user interest or technical acumen.

Interoperability in the DMA

The current draft of the DMA contains several interoperability provisions, all of which apply only to “gatekeepers” (i.e., the largest online platforms):

  1. Mandated interoperability of “ancillary services” (Art 6(1)(f)); 
  2. Real-time data portability (Art 6(1)(h)); and
  3. Business-user access to their own and end-user data (Art 6(1)(i)). 

The first provision, Art 6(1)(f), is meant to force gatekeepers to accommodate, e.g., third-party payment or identification services; for example, people could create social media accounts without providing an email address, which is already possible using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as big a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.
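
For context, third-party sign-in of this kind is typically built on an OAuth-style authorization-code flow. The sketch below shows the shape of that flow; the endpoints, client ID, and field names are all invented for illustration and do not correspond to any real provider.

```python
# Sketch of an OAuth 2.0-style authorization-code flow, the usual mechanism
# behind "Sign in with ..." buttons. Every URL, identifier, and field name
# is invented; real providers differ in the details.
import secrets
from urllib.parse import urlencode

import requests  # third-party HTTP library

IDP = "https://id.example-provider.test"      # hypothetical identity provider
CLIENT_ID = "social-app-123"                  # hypothetical
REDIRECT_URI = "https://social.example.test/callback"

# Step 1: send the user's browser to the provider to authenticate there,
# so the social platform never needs to see an email address or password.
state = secrets.token_urlsafe(16)  # anti-CSRF value, checked on return
login_url = f"{IDP}/authorize?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid",
    "state": state,
})

# Step 2, in the platform's callback handler: exchange the one-time code
# for a token vouching for the user's identity.
def handle_callback(code: str, returned_state: str) -> dict:
    if returned_state != state:
        raise ValueError("state mismatch: possible CSRF attack")
    resp = requests.post(f"{IDP}/token", data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    })
    resp.raise_for_status()
    return resp.json()
```

Note how little data crosses the boundary in this flow compared with, say, messaging interoperability: essentially a token asserting “this is the same user as before.”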

However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. It is therefore important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.

The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.

The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”

Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.

The third provision from Art 6(1)(i) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.
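
The practical difference between these read-only mandates and a one-time export is easiest to see in code. The sketch below assumes hypothetical endpoints and payload shapes; no real platform API is implied.

```python
# Sketch contrasting a one-time data export with the "continuous and
# real-time" read-only access of Arts. 6(1)(h)-(i). Endpoints, parameters,
# and payload shapes are hypothetical; no real platform API is implied.
import time

import requests  # third-party HTTP library

EXPORT_URL = "https://platform.example.test/me/export"    # download once
STREAM_URL = "https://platform.example.test/me/activity"  # continuous

def one_time_export(token: str) -> bytes:
    """GDPR Article 20 style: a single archive of the user's own data."""
    resp = requests.get(EXPORT_URL, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.content

def continuous_read_only(token: str, cursor: str = ""):
    """Yields new events as they happen. Note what is absent: there is no
    write path, and no access to content generated by other users."""
    while True:
        resp = requests.get(
            STREAM_URL,
            headers={"Authorization": f"Bearer {token}"},
            params={"since": cursor},
        )
        resp.raise_for_status()
        page = resp.json()
        for event in page["events"]:  # only data the user generated
            yield event
        cursor = page["next_cursor"]
        time.sleep(5)  # simple polling; a real design might push instead
```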

Why Can’t Stronger Interoperability Be Safely Mandated Today?

Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so that gatekeepers like Facebook end up with a legal duty to allow third parties to read data from, and write data to, Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API and, because the interoperability mandate creates a legal duty to deal, would remove the current safety valve of Facebook being able to cut off access. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:

1. Data sharing and mining via new APIs;

2. New opportunities for phishing and sock puppetry in a federated ecosystem; and

3. More friction for platforms trying to maintain a secure system.

Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”

There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.

So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.

What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.

The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.