Under a draft “adequacy” decision unveiled today by the European Commission, data-privacy and security commitments made by the United States in an October executive order signed by President Joe Biden were found to comport with the EU’s General Data Protection Regulation (GDPR). If adopted, the decision would provide a legal basis for flows of personal data between the EU and the United States.
This is a welcome development, as some national data-protection authorities in the EU have begun to issue serious threats to stop U.S.-owned data-related service providers from offering services to Europeans. Pending more detailed analysis, I offer some preliminary thoughts here.
Decision Responds to the New U.S. Data-Privacy Framework
The Commission’s decision follows the changes to U.S. policy introduced by Biden’s Oct. 7 executive order. In its July 2020 Schrems II judgment, the Court of Justice of the European Union (CJEU) invalidated the prior adequacy decision (for the EU-U.S. Privacy Shield) on the grounds that EU citizens lacked sufficient redress under U.S. law and that U.S. law was not equivalent to “the minimum safeguards” of personal-data protection under EU law. The new executive order introduced redress mechanisms that include review by the civil-liberties-protection officer in the Office of the Director of National Intelligence (DNI), as well as a new Data Protection Review Court (DPRC). The DPRC is proposed as an independent review body that will make decisions binding on U.S. intelligence agencies.
The old framework had sparked concerns about the independence of the DNI’s ombudsperson and about what were seen as insufficient safeguards against the external pressures that individual could face, including the threat of removal. Under the new framework, the independence and binding powers of the DPRC are grounded in regulations issued by the U.S. Attorney General.
To address concerns about the necessity and proportionality of U.S. signals-intelligence activities, the executive order also defines the “legitimate objectives” in pursuit of which such activities can be conducted. These activities would, according to the order, be conducted with the goal of “achieving a proper balance between the importance of the validated intelligence priority being advanced and the impact on the privacy and civil liberties of all persons, regardless of their nationality or wherever they might reside.”
Will the Draft Decision Satisfy the CJEU?
With this draft decision, the European Commission announced it has favorably assessed the executive order’s changes to the U.S. data-protection framework, which apply to foreigners from friendly jurisdictions (presumed to include the EU). If the Commission formally adopts an adequacy decision, however, the decision is certain to be challenged before the CJEU by privacy advocates. In my preliminary analysis after Biden signed the executive order, I summarized some of the concerns raised regarding two aspects relevant to the finding of adequacy: proportionality of data collection and availability of effective redress.
Opponents of granting an adequacy decision tend to rely on an assumption that a finding of adequacy requires virtually identical substantive and procedural privacy safeguards to those required within the EU. As the European Commission notes in the draft decision, this position is not well-supported by CJEU case law, which clearly recognizes that only an “adequate level” and “essential equivalence” of protection are required of third countries under the GDPR.
To date, the CJEU has not had to specify in greater detail precisely what, in its view, these provisions mean. Instead, the Court has been able simply to point to certain features of U.S. law and practice that fell significantly below the GDPR standard (e.g., that the official responsible for providing individual redress was not guaranteed to be independent from political pressure). Future legal challenges to a new Commission adequacy decision will most likely require the CJEU to provide more guidance on what “adequate” and “essentially equivalent” mean.
In the draft decision, the Commission carefully considered the features of U.S. law and practice that the Court previously found inadequate under the GDPR. Nearly half of the explanatory part of the decision is devoted to “access and use of personal data transferred from the [EU] by public authorities in the” United States, with the analysis grounded in the CJEU’s Schrems II decision. The Commission concludes that, collectively, all U.S. redress mechanisms available to EU persons:
…allow individuals to have access to their personal data, to have the lawfulness of government access to their data reviewed and, if a violation is found, to have such violation remedied, including through the rectification or erasure of their personal data.
The Commission accepts that individuals have access to their personal data processed by U.S. public authorities, but clarifies that this access may be legitimately limited—e.g., by national-security considerations. Unlike some of the critics of the new executive order, the Commission does not take the simplistic view that access to personal data must be guaranteed by the same procedure that provides binding redress, including the Data Protection Review Court. Instead, the Commission accepts that other avenues, like requests under the Freedom of Information Act, may perform that function.
Overall, the Commission presents a sophisticated, yet uncynical, picture of U.S. law and practice. The lack of cynicism, e.g., about the independence of the DPRC adjudicative process, will undoubtedly be seen by some as naïve and unrealistic, even if the “realism” in this case rests on speculation about what might happen (e.g., secret changes to U.S. policy), rather than on evidence. Given the changes adopted by the U.S. government, the key question for the CJEU will be whether to follow the Commission’s approach or that of the activists.
What Happens Next?
The draft adequacy decision will now be scrutinized by EU and national officials. It remains to be seen what the collective recommendations of the European Data Protection Board and of the representatives of EU national governments will be, but there are signs that some national data-protection authorities recognize that a finding of adequacy may be appropriate (see, e.g., the opinion from the Hamburg authority).
It is also likely that a significant portion of the European Parliament will be highly critical of the decision, even to the extent of recommending not to adopt it. Importantly, however, none of the consulted bodies have formal power to bind the European Commission on this question. The whole process is expected to take at least several months.
With just a week to go until the U.S. midterm elections, which potentially herald a change in control of one or both houses of Congress, speculation is mounting that congressional Democrats may seek to use the lame-duck session following the election to move one or more pieces of legislation targeting the so-called “Big Tech” companies.
Gaining particular notice—on grounds that it is the least controversial of the measures—is S. 2710, the Open App Markets Act (OAMA). Introduced by Sen. Richard Blumenthal (D-Conn.), the Senate bill has garnered 14 cosponsors: exactly seven Republicans and seven Democrats. It would, among other things, force certain mobile app stores and operating systems to allow “sideloading” and open their platforms to rival in-app payment systems.
Unfortunately, even this relatively restrained legislation—at least, when compared to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act or the European Union’s Digital Markets Act (DMA)—is highly problematic in its own right. Here, I will offer seven major questions the legislation leaves unresolved.
1. Are Quantitative Thresholds a Good Indicator of ‘Gatekeeper Power’?
It is no secret that OAMA has been tailor-made to regulate two specific app stores: the Google Play Store and Apple’s App Store (see here, here, and, yes, even Wikipedia knows it). The text makes this clear by limiting the bill’s scope to app stores with more than 50 million users, a threshold that only Google Play and the Apple App Store currently satisfy.
However, purely quantitative thresholds are a poor indicator of a company’s potential “gatekeeper power.” An app store might have far fewer than 50 million users but cater to a relevant niche market. By the bill’s own logic, why shouldn’t that app store likewise be compelled to be open to competing app distributors? Conversely, it may be easy for users of very large app stores to multi-home or switch seamlessly to competing stores. In either case, raw user counts paint a distorted picture of the market’s realities.
As it stands, the bill’s thresholds appear arbitrary and pre-committed to “disciplining” just two companies: Google and Apple. In principle, good laws should be abstract and general and not intentionally crafted to apply only to a few select actors. In OAMA’s case, the law’s specific thresholds are also factually misguided, as purely quantitative criteria are not a good proxy for the sort of market power the bill purportedly seeks to curtail.
2. Why Does the Bill Not Apply to All App Stores?
Rather than applying to app stores across the board, OAMA targets only those associated with mobile devices and “general purpose computing devices.” It’s not clear why.
For example, why doesn’t it cover app stores on gaming platforms, such as Microsoft’s Xbox or Sony’s PlayStation?
Currently, a PlayStation user can only buy digital games through the PlayStation Store, where Sony reportedly takes a 30% cut of all sales—although its pricing schedule is less transparent than that of mobile rivals such as Apple or Google.
Clearly, this bothers some developers. Much like Epic Games CEO Tim Sweeney’s ongoing crusade against the Apple App Store, indie-game publisher Iain Garner of Neon Doctrine recently took to Twitter to complain about Sony’s restrictive practices. According to Garner, “Platform X” (clearly PlayStation) charges developers up to $25,000 and 30% of subsequent earnings to give games a modicum of visibility on the platform, in addition to requiring them to jump through such hoops as making a PlayStation-specific trailer and writing a blog post. Garner further alleges that Sony severely circumscribes developers’ ability to offer discounts, “meaning that Platform X owners will always get the worst deal!” (see also here).
Microsoft’s Xbox Game Store similarly takes a 30% cut of sales. Presumably, Microsoft and Sony both have the same type of gatekeeper power in the gaming-console market that Apple and Google are said to have on their respective platforms, leading to precisely the issues that OAMA purports to combat: consumers cannot choose alternative app stores through which to buy games on their consoles, and developers must acquiesce to Sony’s and Microsoft’s terms if they want their games to reach those players.
More broadly, dozens of online platforms also charge commissions on the sales made by their creators. To cite but a few: OnlyFans takes a 20% cut of sales; Facebook gets 30% of the revenue that creators earn from their followers; YouTube takes 45% of ad revenue generated by users; and Twitch reportedly rakes in 50% of subscription fees.
This is not to say that all these services are monopolies that should be regulated. To the contrary, it seems like fees in the 20-30% range are common even in highly competitive environments. Rather, it is merely to observe that there are dozens of online platforms that demand a percentage of the revenue that creators generate and that prevent those creators from bypassing the platform. As well they should, after all, because creating and improving a platform is not free.
It is nonetheless difficult to see why legislation regulating online marketplaces should focus solely on two mobile app stores. Ultimately, the inability of OAMA’s sponsors to properly account for this carveout diminishes the law’s credibility.
3. Should Picking Among Legitimate Business Models Be up to Lawmakers or Consumers?
“Open” and “closed” platforms represent two different business models, each with its own advantages and disadvantages. Some consumers may prefer more open platforms because they grant them more flexibility to customize their mobile devices and operating systems. But there are also compelling reasons to prefer closed systems. As Sam Bowman observed, narrowing choice through a more curated system frees users from having to research every possible option every time they buy or use some product. Instead, they can defer to the platform’s expertise in determining whether an app or app store is trustworthy or whether it contains, say, objectionable content.
Currently, users can choose to opt for Apple’s semi-closed “walled garden” iOS or Google’s relatively more open Android OS (which OAMA wants to pry open even further). Ironically, under the pretext of giving users more “choice,” OAMA would take away the possibility of choice where it matters the most—i.e., at the platform level. As Mikolaj Barczentewicz has written:
A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS).
This erases the distinctions between the two and pushes Android and iOS to converge on a single model. But if consumers unequivocally preferred open platforms, Apple would have no customers, because everyone would already be on Android.
Contrary to regulators’ simplistic assumptions, “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play between these two perfectly valid business models that belie simplistic characterizations of one being inherently superior to the other.
It is debatable whether courts, regulators, or legislators are well-situated to resolve these complex tradeoffs by substituting businesses’ product-design decisions and consumers’ revealed preferences with their own. After all, if regulators had such perfect information, we wouldn’t need markets or competition in the first place.
4. Does OAMA Account for the Security Risks of Sideloading?
Both Apple and Google already balance openness against security, albeit to varying degrees. For instance, Android allows sideloading and third-party in-app payment systems to some extent, while Apple runs a tighter ship. However, studies have shown that it is precisely the iOS “walled garden” model that gives it an edge over Android in terms of privacy and security. Even vocal Apple critic Tim Sweeney recently acknowledged that increased safety and privacy were competitive advantages for Apple.
The problem is that far-reaching sideloading mandates—such as the ones contemplated under OAMA—are fundamentally at odds with current privacy and security capabilities (see here and here).
OAMA’s defenders might argue that the law does allow covered platforms to raise safety and security defenses, thus making the tradeoffs between openness and security unnecessary. But the bill places such stringent conditions on those defenses that platform operators will almost certainly be deterred from risking running afoul of the law’s terms. To invoke the safety and security defenses, covered companies must demonstrate that provisions are applied on a “demonstrably consistent basis”; are “narrowly tailored and could not be achieved through less discriminatory means”; and are not used as a “pretext to exclude or impose unnecessary or discriminatory terms.”
Implementing these stringent requirements will drag enforcers into a micromanagement quagmire. There are thousands of potential software-security issues—spyware, malware, rootkits, backdoors, and phishing, to name just a few—each of which poses distinct threats to an operating system. The Federal Trade Commission (FTC) and the federal courts will almost certainly struggle to police the “consistency” requirement across such varied threats.
Likewise, OAMA’s reference to “least discriminatory means” suggests there is only one valid answer to any given security-access tradeoff. Further, depending on one’s preferred balance between security and “openness,” a claimed security risk may or may not be “pretextual,” and thus may or may not be legal.
Finally, the bill text appears to preclude the possibility of denying access to a third-party app or app store for reasons other than safety and privacy. This would undermine Apple’s and Google’s two-tiered quality-control systems, which also control for “objectionable” content such as (child) pornography and social engineering.
5. How Will OAMA Safeguard the Rights of Covered Platforms?
OAMA is also deeply flawed from a procedural standpoint. Most importantly, there is no meaningful way for a firm to contest its designation as a “covered company,” or the harms that designation presumes.
Once a company is “covered,” it is presumed to hold gatekeeper power, with all the associated risks for competition, innovation, and consumer choice. Remarkably, this presumption does not admit any qualitative or quantitative evidence to the contrary. The only thing a covered company can do to rebut the designation is to demonstrate that it, in fact, has fewer than 50 million users.
By preventing companies from showing that they do not hold the kind of gatekeeper power that harms competition, decreases innovation, raises prices, and reduces choice (the bill’s stated objectives), OAMA severely tilts the playing field in the FTC’s favor. Even the EU’s enforcer-friendly DMA incorporated a last-minute amendment allowing firms to dispute their status as “gatekeepers.” While this defense is not perfect (companies cannot rely on the same qualitative evidence that the European Commission can use against them), at least gatekeeper status can be contested under the DMA.
6. Should Legislation Protect Competitors at the Expense of Consumers?
Like most of the new wave of regulatory initiatives against Big Tech (but unlike antitrust law), OAMA is explicitly designed to help competitors, with consumers footing the bill.
For example, OAMA prohibits covered companies from using or combining nonpublic data obtained from third-party apps or app stores operating on their platforms in competition with those third parties. While this may have the short-term effect of redistributing rents away from these platforms and toward competitors, it risks harming consumers and third-party developers in the long run.
Platforms’ ability to integrate such data is part of what allows them to bring better and improved products and services to consumers in the first place. OAMA tacitly admits this by recognizing that the use of nonpublic data grants covered companies a competitive advantage. In other words, it allows them to deliver a product that is better than competitors’.
Prohibiting self-preferencing raises similar concerns. Why shouldn’t a company that has invested billions in developing a successful platform and ecosystem give preference to its own products to recoup some of that investment? After all, the possibility of exercising some control over downstream and adjacent products is what might have driven the platform’s development in the first place. In other words, self-preferencing may be a symptom of competition, and not the absence thereof. Third-party companies also would have weaker incentives to develop their own platforms if they can free-ride on the investments of others. And platforms that favor their own downstream products might simply be better positioned to guarantee their quality and reliability (see here and here).
In all of these cases, OAMA’s myopic focus on improving the lot of competitors for easy political points will upend the mobile ecosystems from which both users and developers derive significant benefit.
7. Shouldn’t the EU Bear the Risks of Bad Tech Regulation?
Finally, U.S. lawmakers should ask themselves whether the European Union, which has no tech leaders of its own, is really a model to emulate. Today, after all, marks the day the long-awaited Digital Markets Act—the EU’s response to perceived contestability and fairness problems in the digital economy—officially takes effect. In anticipation of the law entering into force, I summarized some of the outstanding issues that will define implementation moving forward in this recent tweet thread.
We have been critical of the DMA here at Truth on the Market on several factual, legal, economic, and procedural grounds. The law’s problems range from its essentially being a tool to redistribute rents away from platforms and toward third parties, despite it being unclear why the latter group is inherently more deserving (Pablo Ibañez Colomo has raised a similar point); to its opacity and lack of clarity, and a process that appears tilted in the Commission’s favor; to the awkward way it interacts with EU competition law, ignoring the welfare tradeoffs between the models it seeks to impose and perfectly valid alternatives (see here and here); to its flawed assumptions (see, e.g., here on contestability under the DMA); to the dubious legal and economic value of the theory of harm known as “self-preferencing”; to the very real possibility of unintended consequences (e.g., in relation to security and interoperability mandates).
In other words, that the United States lags the EU in seeking to regulate this area might not be a bad thing, after all. Despite the EU’s insistence on being a trailblazing agenda-setter at all costs, the wiser thing in tech regulation might be to remain at a safe distance. This is particularly true when one considers the potentially large costs of legislative missteps and the difficulty of recalibrating once a course has been set.
U.S. lawmakers should take advantage of this dynamic and learn from some of the Old Continent’s mistakes. If they play their cards right and take the time to read the writing on the wall, they might just succeed in averting antitrust’s uncertain future.
Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.
Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.
The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission and some good solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse, but more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.
More Good News from the Commission
On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action as anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients, who might face tighter budget constraints than the median physician.
Now comes October and an amended complaint in the FTC’s challenge to Meta’s proposed acquisition of Within, developer of the virtual-reality fitness app Supernatural. The amended complaint is even weaker than the opening salvo. The FTC now alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”
So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?
Potential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. More speculation: the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.
When It Rains It Pours Regulations
On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, the Commission announced an Oct. 20 open meeting with three more ANPRs on the agenda.
First, on the advance notice: I’m not sure what they mean by “commercial surveillance.” The term doesn’t appear in statutory law, or in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort. They don’t say or even sketch a possible rule. That’s a problem for the FTC, because the law demands that the Commission state its regulatory objectives, along with regulatory alternatives under consideration, in the ANPR itself. If they get to an NPRM, they are required to describe a proposed rule with specificity.
What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:
When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce….I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate….It’s a naked power grab.
Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: the questions do not violate the Constitution. But she’s probably half wrong too.
But wait, there’s more! There were three additional ANPRs on the Commission’s Oct. 20 agenda. So that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Are more regulations an odd way to ask Congress for money? Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.
Laws, Like People, Have Their Limits
Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:
The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.
Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress by the agencies actually charged to enforce those laws. And to equal protection. And three more, at least, for a little regulatory humility, if we find it.
The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.
The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.
In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.
Protectionism Under the Guise of Cybersecurity
Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of Georgia Institute of Technology, in that it would prohibit both the transfer of data to other countries and the involvement of foreign capital in the processing of even data that is not transferred.
The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because it saw them as insufficiently grounded in genuine cybersecurity needs.
Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”
The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.
Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.
Do Security and Privacy Require Protectionism?
As others have discussed at length (in addition to Swire and Kennedy-Mayo, see also Theodore Christakis), the evidence for the cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly tied to national borders.
In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were only hosted there. (Notably, OVH is among the most vocal European proponents of hard data localization).
Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.
Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats). The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.
The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to take seriously the claims that Europeans’ data is much better safeguarded in their home countries than when it flows through the networks of the EU’s democratic allies, like the United States.
Digital Sovereignty as Industrial Policy
Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”
She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.
Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.
Effect on startups
Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.
In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”
More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.
Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.
For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to the all-too-predictable bureaucracy-heavy grantmaking, which beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”
Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:
Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.
More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.
The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.
Direct investment
Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland, and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles that today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.
The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy.
According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding amount of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.
It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.
Conclusion
Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.
National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but they will serve to wall the region off from the kind of investment that can make it thrive.
Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.
Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it would be appropriate to check how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.
American ‘cookie law’: welcome to consent hell
A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:
data processing is necessary in connection with a contractual relationship with the user, or
“processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).
For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing but excludes the option to collect or process “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data.” It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. But the expected result will be many, many more popup consent queries, like those that already bedevil European users.
Using personal data to create new products
Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to the permissible uses of covered data to develop new products. Such use would clearly be allowed only where each data subject concerned could be asked whether they “request” the specific future product. By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.
Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):
With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.
While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.
The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.
Under the GDPR, aside from the data subject’s consent, the firm also could rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that due to requirements that the interests of the data controller and the data subject must be appropriately weighed, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.
Developing this path in the ADPPA would arguably provide a more sensible basis for data uses like the reuse of data for new product development. This could be superior even to express consent, which faces problems like “consent fatigue.” These are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.
Problems with ‘de-identified data’
Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.
The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
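To make the definitional problem concrete, here is a minimal sketch of the two definitions as drafted; the function names and the boolean parameter are my own illustrative shorthand, not language from the bill:

```python
# A minimal sketch (illustrative only, not text from the bill) of the
# ADPPA's two definitions as drafted. The boolean stands in for the
# bill's "identifies or is reasonably linkable to an individual or a
# device" test.

def is_covered_data(identifies_or_linkable: bool) -> bool:
    """'Covered data': information that identifies, or is reasonably
    linkable to, an individual or a device."""
    return identifies_or_linkable

def is_deidentified_data(identifies_or_linkable: bool) -> bool:
    """'De-identified data' as defined: information that does NOT
    identify and is NOT reasonably linkable, with no requirement
    that it was ever derived from identifiable data."""
    return not is_covered_data(identifies_or_linkable)

# Every piece of information satisfies exactly one of the two
# definitions, so nothing escapes the Act's scope:
for linkable in (True, False):
    assert is_covered_data(linkable) or is_deidentified_data(linkable)
```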
For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:
are derived from identifiable data, but
hold a possibility of re-identification (weaker than “reasonably linkable”), and
are processed by the entity that previously processed the original identifiable data.
Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.
The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.
The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?
The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):
to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
to publicly commit “in a clear and conspicuous manner—
to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
to not attempt to re-identify the information with any individual or device;”
to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.
The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.
The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from pursuing data minimization (resulting in de-identification) if they may at any point in the future need to link the data with individuals again. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.
Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”
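Schematically, the lifecycle just described can be sketched as a simple two-state machine; the state names and transitions below are, again, my own illustration of this reading, not statutory text:

```python
from enum import Enum

class DataStatus(Enum):
    COVERED = "covered data"        # identifiable; full ADPPA rules apply
    DEIDENTIFIED = "de-identified"  # no longer reasonably linkable

def de_identify(status: DataStatus) -> DataStatus:
    # Covered data is transformed so that it is no longer reasonably
    # linkable to an individual or device.
    assert status is DataStatus.COVERED
    return DataStatus.DEIDENTIFIED

def re_identify(status: DataStatus) -> DataStatus:
    # Re-linking the data constitutes a new "collection" of covered
    # data, so the full regime re-attaches automatically -- which is
    # why the first duty adds nothing.
    assert status is DataStatus.DEIDENTIFIED
    return DataStatus.COVERED
```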
Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. Logically, one would have expected the bill to allow data to be shared in a de-identified form; that would align with the principle of data minimization. What the ADPPA does instead is effectively to impose a duty to share de-identified personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.
Conclusion
The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).
Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA’s drafters may do better to reuse the best features of the GDPR’s conceptual structure while cutting its excesses. Reinventing the wheel by proposing new concepts did not work well in this draft.
After years of debate and negotiations, European lawmakers have agreed upon what will most likely be the final iteration of the Digital Markets Act (“DMA”), following the March 24 final round of “trilogue” talks.
For the uninitiated, the DMA is one in a string of legislative proposals around the globe intended to “rein in” tech companies like Google, Amazon, Facebook, and Apple through mandated interoperability requirements and other regulatory tools, such as bans on self-preferencing. Other important bills from across the pond include the American Innovation and Choice Online Act, the ACCESS Act, and the Open App Markets Act.
In many ways, the final version of the DMA represents the worst possible outcome, given the items that were still up for debate. The Commission caved to some of the Parliament’s more excessive demands—such as sweeping interoperability provisions that would extend not only to “ancillary” services, such as payments, but also to messaging services’ basic functionalities. Other important developments include the addition of voice assistants and web browsers to the list of Core Platform Services (“CPS”), and symbolically higher “designation” thresholds that further ensure the act will apply overwhelmingly to just U.S. companies. On a brighter note, lawmakers agreed that companies could rebut their designation as “gatekeepers,” though it remains to be seen how feasible that will be in practice.
We offer here an overview of the key provisions included in the final version of the DMA and a reminder of the shaky foundations it rests on.
Interoperability
Among the most important of the DMA’s new rules concerns mandatory interoperability among online platforms. In a nutshell, digital platforms that are designated as “gatekeepers” will be forced to make their services “interoperable” (i.e., compatible) with those of rivals. It is argued that this will make online markets more contestable and thus boost consumer choice. But as ICLE scholars have been explaining for some time, this is unlikely to be the case (here, here, and here). Interoperability is not the panacea EU legislators claim it to be. As former ICLE Director of Competition Policy Sam Bowman has written, there are many things that could be interoperable, but aren’t. The reason is that interoperability comes with costs as well as benefits. For instance, it may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to the market and for consumers to be able to choose among them. Economists Michael L. Katz and Carl Shapiro concur:
Although compatibility has obvious benefits, obtaining and maintaining compatibility often involves a sacrifice in terms of product variety or restraints on innovation.
There are other potential downsides to interoperability. For instance, a given set of interoperable standards might be too costly to implement and/or maintain; it might preclude certain pricing models that increase output; or it might compromise some element of a product or service that offers benefits specifically because it is not interoperable (such as, e.g., security features). Consumers may also genuinely prefer closed (i.e., non-interoperable) platforms. Indeed: “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play that belie simplistic characterizations of one being inherently superior to the other.
Further, as Sam Bowman observed, narrowing choice through a more curated experience can also be valuable for users, as it frees them from having to research every possible option every time they buy or use some product (if you’re unconvinced, try turning off your spam filter for a couple of days). Instead, the relevant choice consumers exercise might be in choosing among brands. In sum, where interoperability is a desirable feature, consumer preferences will tend to push for more of it. However, it is fundamentally misguided to treat mandatory interoperability as a cure-all elixir or a “super tool” of “digital platform governance.” In a free-market economy, it is not (nor should it be) the role of courts and legislators to substitute their own judgments, based on diffuse notions of “fairness,” for businesses’ product-design decisions and consumers’ revealed preferences. After all, if we could entrust such decisions to regulators, we wouldn’t need markets or competition in the first place.
Of course, it was always clear that the DMA would contemplate some degree of mandatory interoperability; indeed, this was arguably the new law’s biggest selling point. What was up in the air until now was the scope of such obligations. The Commission had initially pushed for a comparatively restrained approach, requiring interoperability “only” in ancillary services, such as payment systems (“vertical interoperability”). By contrast, the European Parliament called for more expansive requirements that would also encompass social-media platforms and other messaging services (“horizontal interoperability”).
The problem with such far-reaching interoperability requirements is that they are fundamentally out of step with current privacy and security capabilities. As ICLE Senior Scholar Mikolaj Barczentewicz has repeatedly argued, the Parliament’s insistence on going significantly beyond the original DMA proposal and mandating interoperability of messaging services is overly broad and irresponsible. Indeed, as Mikolaj notes, the “likely result is less security and privacy, more expenses, and less innovation.” The DMA’s defenders would retort that the law allows gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament) to protect safety and privacy (it is not yet clear which wording the final version has adopted). Either way, however, the standard may be too high, and companies may very well offer lower security to avoid liability for adopting measures that would be judged by the Commission and the courts as going beyond what is “strictly necessary” or “indispensable.” These safeguards will inevitably be all the more indeterminate (and thus ineffectual) if weighed against other vague concepts at the heart of the DMA, such as “fairness.”
Gatekeeper Thresholds and the Designation Process
Another important issue in the DMA’s construction concerns the designation of what the law deems “gatekeepers.” Indeed, the DMA will only apply to such market gatekeepers—so-designated because they meet certain requirements and thresholds. Unfortunately, the factors that the European Commission will consider in conducting this designation process—revenues, market capitalization, and user base—are poor proxies for firms’ actual competitive position. This is not surprising, however, as the procedure is mainly designed to ensure certain high-profile (and overwhelmingly American) platforms are caught by the DMA.
From this perspective, the last-minute increase in revenue and market-capitalization thresholds—from 6.5 billion euros to 7.5 billion euros, and from 65 billion euros to 75 billion euros, respectively—won’t change the scope of the companies covered by the DMA very much. But it will serve to confirm what we already suspected: that the DMA’s thresholds are mostly tailored to catch certain U.S. companies, deliberately leaving out EU and possibly Chinese competitors (see here and here). Indeed, what would have made a difference here would have been lowering the thresholds, but this was never really on the table. Ultimately, tilting the European Union’s playing field against its top trading partner, in terms of exports and trade balance, is economically, politically, and strategically unwise.
As a consolation of sorts, it seems that the Commission managed to squeeze in a rebuttal mechanism for designated gatekeepers. Imposing far-reaching obligations on companies with no (or very limited) recourse to escape the onerous requirements of the DMA would be contrary to the basic principles of procedural fairness. Still, it remains to be seen how this mechanism will be articulated and whether it will actually be viable in practice.
Double (and Triple?) Jeopardy
Two recent judgments from the European Court of Justice (ECJ)—Nordzucker and bpost—are likely to underscore the unintended effects of cumulative application of both the DMA and EU and/or national competition laws. The bpost decision is particularly relevant, because it lays down the conditions under which cases that evaluate the same persons and the same facts in two separate fields of law (sectoral regulation and competition law) do not violate the principle of ne bis in idem, also known as “double jeopardy.” As paragraph 51 of the judgment establishes:
There must be precise rules to determine which acts or omissions are liable to be subject to duplicate proceedings;
The two sets of proceedings must have been conducted in a sufficiently coordinated manner and within a similar timeframe; and
The overall penalties must match the seriousness of the offense.
It is doubtful whether the DMA fulfills these conditions. This is especially unfortunate considering the overlapping rules, features, and goals among the DMA and national-level competition laws, which are bound to lead to parallel procedures. In a word: expect double and triple jeopardy to be hotly litigated in the aftermath of the DMA.
Of course, other relevant questions have been settled which, for reasons of scope, we will have to leave for another time. These include the level of fines (up to 10% of worldwide revenue, or 20% in the case of repeat offenses); the definition and consequences of systemic noncompliance (it seems that the Parliament’s draconian push for a general ban on acquisitions in case of systemic noncompliance has been dropped); and the addition of more core platform services (web browsers and voice assistants).
The DMA’s Dubious Underlying Assumptions
The fuss and exhilaration surrounding the impending adoption of the EU’s most ambitious competition-related proposal in decades should not obscure the dubious assumptions that underpin it. Consider that:
It is still unclear that intervention in digital markets is necessary, let alone urgent.
Even if it were clear, there is scant evidence to suggest that tried and tested ex post instruments, such as those envisioned in EU competition law, are not up to the task.
Even if the prior two points had been established beyond any reasonable doubt (which they haven’t), it is still far from clear that DMA-style ex ante regulation is the right tool to address potential harms to competition and to consumers that arise in digital markets.
It is unclear that intervention is necessary
Despite a mounting moral panic around, and zealous political crusading against, Big Tech (an epithet meant to conjure antipathy and distrust), it is still unclear that intervention in digital markets is necessary. Much of the behavior the DMA assumes to be anti-competitive has plausible pro-competitive justifications. Self-preferencing, for instance, is a normal part of how platforms operate, both to improve the value of their core products and to earn returns to reinvest in their development. As ICLE’s Dirk Auer points out, since platforms’ incentives are to maximize the value of their entire product ecosystem, those that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product (the example of Facebook’s integration of Instagram is a case in point). Thus, while self-preferencing may, in some cases, be harmful, a blanket presumption of harm is thoroughly unwarranted.
Similarly, the argument that switching costs and data-related increasing returns to scale (in fact, data generally entails diminishing returns) have led to consumer lock-in and thereby raised entry barriers has also been exaggerated to epic proportions (pun intended). As we have discussed previously, there are plenty of counterexamples where firms have easily overcome seemingly “insurmountable” barriers to entry, switching costs, and network effects to disrupt incumbents.
To pick a recent case: how many of us had heard of Zoom before the pandemic? Where was TikTok three years ago? (see here for a multitude of other classic examples, including Yahoo and Myspace).
Can you really say, with a straight face, that switching costs between messaging apps are prohibitive? I’m not even that active and I use at least seven such apps on a daily basis: Facebook Messenger, WhatsApp, Instagram, Twitter, Viber, Telegram, and Slack (it took me all of three minutes to download and start using Slack—my newest addition). In fact, chances are that, like me, you have always multihomed nonchalantly and had never even considered that switching costs were impossibly high (or that they were a thing) until the idea that you were “locked in” by Big Tech was drilled into your head by politicians and other busybodies looking for trophies to adorn their walls.
What about the “unprecedented,” quasi-fascistic levels of economic concentration? First, measures of market concentration are sometimes anchored in flawed methodology and market definitions (see, e.g., Epic’s insistence that Apple is a monopolist in the market for operating systems, conveniently ignoring that competition occurs at the smartphone level, where Apple has a worldwide market share of 15%—see pages 45-46 here). But even if such measurements were accurate, high levels of concentration don’t necessarily mean that firms do not face strong competition. In fact, as Nicolas Petit has shown, tech companies compete vigorously against each other across markets.
But perhaps the DMA’s raison d’être rests less on market failure than on legal or enforcement failure? This, too, is misguided.
EU competition law is already up to the task
As Giuseppe Colangelo has argued persuasively (here and here), it is not at all clear that ex post competition regulation is insufficient to tackle anti-competitive behavior in the digital sector:
Ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices described as new and peculiar to app stores.
The recent Google Shopping decision, in which the Commission found that Google had abused its dominant position by preferencing its own online-shopping service in Google Search results, is a case in point (the decision was confirmed by the General Court and is now pending review before the European Court of Justice). The “self-preferencing” category has since been applied by other EU competition authorities. The Italian competition authority, for instance, fined Amazon 1 billion euros for preferencing its own distribution service, Fulfilled by Amazon, on the Amazon marketplace (i.e., Amazon.it). Thus, Article 102, which includes prohibitions on “applying dissimilar conditions to similar transactions,” appears sufficiently flexible to cover self-preferencing, as well as other potentially anti-competitive offenses relevant to digital markets (e.g., essential facilities).
For better or for worse, EU competition law has historically been sufficiently pliable to serve a range of goals and values. It has also allowed for experimentation and incorporated novel theories of harm and economic insights. Here, the advantage of competition law is that it allows for a more refined, individualized approach that can avoid some of the pitfalls of applying a one-size-fits-all model across all digital platforms. Those pitfalls include: harming consumers, jeopardizing the business models of some of the most successful and pro-consumer companies in existence, and ignoring the differences among platforms, such as between Google’s and Apple’s app stores. I turn to these issues next.
Ex ante regulation probably isn’t the right tool
Even if it were clear that intervention is necessary and that existing competition law was insufficient, it is not clear that the DMA is the right regulatory tool to address any potential harms to competition and consumers that may arise in digital markets. Here, legislators need to be wary of unintended consequences, trade-offs, and regulatory fallibility. For one, it is possible that the DMA will essentially consolidate the power of tech platforms, turning them into de facto public utilities. This will not foster competition, but rather will make smaller competitors systematically dependent on so-called gatekeepers. Indeed, why become the next Google if you can just free ride off of the current Google? Why download an emerging messaging app if you can already interact with its users through your current one? In a way, then, the DMA may become a self-fulfilling prophecy.
Moreover, turning closed or semi-closed platforms such as the iOS into open platforms more akin to Android blurs the distinctions among products and dampens interbrand competition. It is a supreme paradox that interoperability and sideloading requirements purportedly give users more choice by taking away the option of choosing a “walled garden” model. As discussed above, overriding the revealed preferences of millions of users is neither pro-competitive nor pro-consumer (but it probably favors some competitors at the expense of those two things).
Nor are many of the other obligations contemplated in the DMA necessarily beneficial to consumers. Do users really not want default apps to come preloaded on their devices, preferring instead to download and install them manually? Ditto for operating systems. What is the point of an operating system if it doesn’t come with certain functionalities, such as a web browser? What else should we unbundle—the keyboard on iOS? The flashlight? Do consumers really want to choose from dozens of app stores when turning on their new phone for the first time? Do they really want to have their devices cluttered with pointless split-screens? Do users really want to find all their contacts (and be found by all their contacts) across all messaging services? (I switched to Viber because I emphatically didn’t.) Do they really want to have their privacy and security compromised because of interoperability requirements?

Then there is the question of regulatory fallibility. As Alden Abbott has written on the DMA and other ex ante regulatory proposals aimed at “reining in” tech companies:
Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example).
This brings us back to the second point: without evidence that antitrust law is “not up to the task,” far-reaching and untested regulatory initiatives with potentially high error costs are put forth as superior to long-established, consumer-based antitrust enforcement. Yes, antitrust may have downsides (e.g., relative indeterminacy and slowness), but these pale in comparison to the DMA’s (e.g., large error costs resulting from high information requirements, rent-seeking, agency capture).
Conclusion
The DMA is an ambitious piece of regulation purportedly aimed at ensuring “fair and open digital markets.” This implies that markets are currently unfair and closed, or that they risk becoming so absent far-reaching regulatory intervention at the EU level. However, it is unclear to what extent such assumptions are borne out by the reality of markets. Are digital markets really closed? Are they really unfair? If so, is it really certain that regulation is necessary? Has antitrust truly proven insufficient? The DMA further implies that ex ante regulation is the right way to tackle these problems, and that its costs won’t outweigh its benefits. These are heroic assumptions that have never been seriously put to the test.
Considering such brittle empirical foundations, the DMA was always going to be a contentious piece of legislation. However, there was always the hope that EU legislators would show restraint in the face of little empirical evidence and high error costs. Today, these hopes have been dashed. With the adoption of the DMA, the Commission, Council, and the Parliament have arguably taken a bad piece of legislation and made it worse. The interoperability requirements in messaging services, which are bound to be a bane for user privacy and security, are a case in point.
After years of trying to anticipate the whims of EU legislators, we finally know where we’re going, but it’s still not entirely clear why we’re going there.
There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.) and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.
In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability and (2) mandating device neutrality (which enables the sideloading of applications, a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).
Interoperability
Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services and with large online platforms, in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. This way, it might matter less than it does now what content moderation decisions Facebook or other platforms make. Facebook users could choose alternative content moderators, delivering the kind of news feed that those users expect.
The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.
At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) was originally designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
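The postcard analogy can be made concrete with a minimal sketch of a classic SMTP submission (the host and addresses are placeholders): absent an encryption layer such as STARTTLS, which was bolted on decades after the protocol was designed, the headers and body travel in cleartext, readable by every relay on the path.

```python
# A minimal sketch of a classic (pre-TLS) SMTP submission. Everything in
# `msg` crosses the network in cleartext, like a postcard without an
# envelope. Host and addresses are placeholders, not real endpoints.
import smtplib

msg = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: Lunch\r\n\r\n"
    "Meet at noon."
)
with smtplib.SMTP("mail.example.com", 25) as server:  # plain TCP, no TLS
    server.sendmail("alice@example.com", "bob@example.com", msg)
```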
Using currently available technology to provide alternative interfaces or moderation services for social-media platforms, third-party developers would have to be able to access much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.
Several constraints must be in place for interoperability frameworks to safeguard privacy and security effectively.
First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.
Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.
If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.
It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether the interoperating third parties’ IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.
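A purely illustrative sketch of such a technical filter shows why it provides so little assurance; the geolocation lookup is stubbed out here, standing in for any real IP-to-jurisdiction service.

```python
# Naive jurisdiction filter of the kind described above. Illustrative only;
# geolocate() is a stub for a hypothetical IP-geolocation service.
ALLOWED_REGIONS = {"EU", "EEA", "US"}

def geolocate(ip_address: str) -> str:
    # A real implementation would query a geolocation database; either way,
    # the check can only ever see the *apparent* network origin of a request.
    return "EU"

def may_interoperate(ip_address: str) -> bool:
    return geolocate(ip_address) in ALLOWED_REGIONS

# A motivated bad actor simply routes traffic through a rented server or
# VPN exit inside an allowed region; the platform sees that address and
# the check passes, regardless of where the actor really sits.
print(may_interoperate("203.0.113.7"))  # True
```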
Article 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited, applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately.
The drafts of the DMA adopted by the European Council and by the European Parliament attempt to address that, but they only allow gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high and could push gatekeepers to offer lower security to avoid liability for adopting measures that would be judged by EU institutions and the courts as going beyond what is strictly necessary or indispensable.
The more recent DMA proposal from the European Parliament goes significantly beyond the original proposal, mandating full interoperability of “number-independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning a “high level of security and personal data protection” does not come close to addressing the seriousness of the risks created by the mandate. Instead of facing up to the risks and ensuring that the mandate itself be limited in ways that minimize them, the proposal seems simply to expect that the gatekeepers can solve the problems if they only “nerd harder.”
All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:
restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.
The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.
Device Neutrality (Sideloading)
Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing installation of third-party software through app stores other than the one provided by the device manufacturer (e.g., Apple’s App Store for iOS devices). A similar express provision for sideloading is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require permitting sideloading.
A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS). By taking away the choice of a walled garden environment, a sideloading mandate will effectively force users to use whatever alternative app stores are preferred by particular app developers. App developers would have strong incentive to set up their own app stores or to move their apps to app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.
This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly.
Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the European Council and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.
There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would prohibit outright a privacy and security-protection model that many users rationally choose today. Even with broader exemptions, this loss will be genuine. It is unclear whether taking away this choice from users is justified.
Conclusion
All the U.S. and EU legislative proposals considered here betray a policy preference for privileging uncertain and speculative competition gains at the expense of introducing a new and clear danger to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source).
Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors coming from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service paid by the legislative proposals to privacy and security is not much more than that. Policymakers should be expected to offer a much more detailed vision of concrete safeguards and mechanisms of enforcement when proposing rules that come with significant and entirely predictable privacy and security risks. Such vision is lacking on both sides of the Atlantic.
I do not want to suggest that interoperability is undesirable. The argument of this paper focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.
Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private-ordering processes that seek out those forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to broadly open their services and data to all interested parties, even where doing so wouldn’t make sense for privacy, security, or other efficiency reasons. It would be underinclusive in that the broad mandate would sap regulated firms’ resources and deter them from looking for new innovative uses that might make sense, but that fall outside the mandate. Thus, the likely result is less security and privacy, more expense, and less innovation.
The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA) during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.
The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the negative costs imposed upon consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.
Below are 10 significant misconceptions that underpin the legislation.
1. There Is No Evidence that Self-Preferencing Is Generally Harmful
Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.
Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.
As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.
Consider a few examples from the empirical literature:
Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”
2. Interoperability Is Not Costless
There are many things that could be interoperable, but aren’t. The reason not everything is interoperable is because interoperability comes with costs, as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds.
As Sam Bowman has observed, there are often costs that prevent interoperability from being worth the tradeoff, such as that:
It might be too costly to implement and/or maintain.
It might prescribe a certain product design and prevent experimentation and innovation.
It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
It might increase the risk of something not working, or of security breaches.
It might prevent certain pricing models that increase output.
It might compromise some element of the product or service that benefits specifically from not being interoperable.
In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.
In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.
3. Consumers Often Prefer Closed Ecosystems
Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?
Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.
The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.
Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.
It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.
They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.
Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.
4. Data Portability Can Undermine Security and Privacy
As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.
Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.
As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.
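A back-of-the-envelope calculation using the figures just cited shows how friend-level permissions multiply exposure (the per-user average is implied by the reported totals, not separately measured):

```python
# Figures as reported above: ~270,000 consenting users yielded data on
# ~50 million people. The implied multiplier is the average number of
# non-consenting friends exposed per consenting user.
consenting_users = 270_000
total_affected = 50_000_000
print(round(total_affected / consenting_users))  # ~185 people per grant
```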
In short, there is often no simple solution to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.
5. Network Effects Are Rarely Insurmountable
Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.
6. Profits Facilitate New and Exciting Platforms
As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers/retailers to rapidly expand because (i) users do not have to fear their apps contain some form of malware and (ii) they greatly reduce payments frictions, most notably security-related ones.
While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.
The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.
This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.
This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.
7. Large Market Share Does Not Mean Anticompetitive Outcomes
Scholars routinely cite the putatively strong concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as economists like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that prospect alone will discipline incumbents’ behavior.
Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.
Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).
Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.
Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.
8. Vertical Integration Generally Benefits Consumers
Vertical behavior of digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of those few studies are regularly overstated and, even taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.
There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:
[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.
In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.
9. There Is No Such Thing as Data Network Effects
Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.
This is supported by significant empirical evidence. As was shown by the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:
Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.
Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.
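The shape of that claim is easy to illustrate with a toy learning curve. The 1/sqrt(n) error term below is a stylized assumption common in statistical learning, chosen purely for illustration; it is not an estimate from the survey cited above.

```python
# Toy illustration of diminishing marginal returns to data: each tenfold
# increase in data improves the stylized accuracy metric by less than the
# previous one. The functional form and constants are arbitrary.
import math

def accuracy(n_rows: int) -> float:
    return 0.9 - 0.5 / math.sqrt(n_rows)  # error shrinks like 1/sqrt(n)

prev = None
for n in (1_000, 10_000, 100_000, 1_000_000):
    acc = accuracy(n)
    gain = None if prev is None else acc - prev
    note = f", gain {gain:.4f}" if gain is not None else ""
    print(f"{n:>9} rows: accuracy {acc:.4f}{note}")
    prev = acc
# The gain from 100k -> 1M rows is far smaller than from 1k -> 10k, which
# is the sense in which data "entails diminishing marginal returns."
```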
10. Antitrust Enforcement Has Not Been Lax
The popular narrative has it that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.
The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to make conclusions about how effective antitrust law is. Firms act in the shadow of the law, and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.
The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether or not antitrust enforcement levels are optimal.
[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]
Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.
Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.
I will highlight five areas of concern with the DMU proposal.
I. Chilling Effects
The DMU’s ability to designate a firm as being of strategic market significance—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms, and that may prove true, but there is nothing in the proposal limiting the DMU’s reach.
Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.
II. SMS Designations Should Not Apply to the Whole Firm
The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.
Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.
Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.
III. The DMU Cannot Be Evidence-Based Given Its Goals and Objectives
The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime is unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.
Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between the DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things the DMU has already considered is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.
Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.
Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.
IV. Speed of Proposals
Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.
As I mentioned earlier, it seems that, to Mr. Coscelli, a more cautious standard of proof is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than upon itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.
But it is not only merging parties the CMA would deprive of due process; the DMU’s pro-competition interventions (PCIs), SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.
The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar. Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.
V. Antitrust Is Not Always the Answer
Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.
Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.
Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.
There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. The ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer-protection rules, not to apply an ill-fitting set of competition-law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.
Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.
In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.
[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.
A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.
Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (See summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.
Discussion
As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy: competition rules would generate higher error costs than adjudications; could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and would chill innovative, efficiency-seeking business arrangements.
Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic which best fits under the “consumer protection” category. Rather, the FTC most likely would initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).
MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”
Nevertheless, MMR rules would be subject to two substantial sources of legal risk.
The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations aimed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).
An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson–Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot, then, of a Section 18 FTC data-protection rule enactment could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiply business costs.
The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:
The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.
In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
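Viewed formally, Section 5(n) reduces to a three-prong conjunctive test. The following sketch is purely illustrative, a reading of the quoted statutory text rather than a statement of how the FTC or a court would apply it, but it makes plain how each prong independently constrains an unfairness finding:

```python
def unfair_under_section_5n(substantial_injury: bool,
                            reasonably_avoidable: bool,
                            benefits_outweigh_injury: bool) -> bool:
    """Illustrative reading of Section 5(n): all three prongs must hold
    before an act or practice may be declared unfair."""
    return (substantial_injury
            and not reasonably_avoidable
            and not benefits_outweigh_injury)

# A substantial injury that consumers could reasonably avoid fails prong 2:
assert unfair_under_section_5n(True, True, False) is False
```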
Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data protection rule.
Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.
Conclusion
At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya will be confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it would quite possibly add an additional layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.
But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, however, are topics for another day.
The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.
Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services.
All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising since they mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.
The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.
In general, the bills are misguided for three main reasons.
One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars).
Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.
Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business.
The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 50 million U.S.-based monthly active users, that have a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including:
Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
Conditioning access or status on purchasing other products or services from the platform;
Using user data to support the platform’s own products in ways not extended to competitors;
Restricting the platform’s commercial users from using or accessing data generated on the platform by their own customers;
Restricting platform users from uninstalling software pre-installed on the platform;
Restricting platform users from providing links to facilitate business off of the platform;
Preferencing the platform’s own products or services in search results or rankings;
Interfering with how a dependent business prices its products;
Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
Retaliating against users who raise concerns with law enforcement about potential violations of the act.
On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.
Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face.
It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system.
This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.
Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).
Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.
This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.
Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.
Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position.
The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. start-up exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.
So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired. It would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.
Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.
Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.
It can also make digital services more buggy and unreliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs. iOS: Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.
Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator.
In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.
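To see why reliability matters so much to third-party developers, consider what building against a mandated data feed looks like in practice: every call must anticipate outages like those Open Banking has experienced. A minimal sketch of the defensive retry logic such a developer would need, with a purely hypothetical endpoint URL:

```python
import time
import urllib.error
import urllib.request

API_URL = "https://api.example-bank.test/v1/accounts"  # hypothetical endpoint

def fetch_with_backoff(url: str, max_attempts: int = 5) -> bytes:
    """Read from a mandated interoperability feed, retrying with exponential
    backoff because the upstream service may be down or overloaded."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts:
                raise  # the third-party app is only as reliable as the feed
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("unreachable")
```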
A bill that mirrors language in the Endless Frontier Act, recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill, sponsored by Rep. Joe Neguse (D-Colo.), would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.
Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million.
In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually a good thing depends on how the agencies spend it.
It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and studying the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to support whatever of the above proposals make it into law, then it could be very harmful.
Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.
At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see?
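In software terms, the “flavors” idea amounts to letting third parties swap in their own ranking function over the same pool of posts. A minimal sketch of that architecture, with all names invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    likes: int
    timestamp: float

# A "feed flavor" is simply a third-party ranking policy over the same posts.
RankingPolicy = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def most_liked(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render_feed(posts: List[Post], policy: RankingPolicy) -> List[str]:
    """The platform holds the content; the plug-in policy only decides order."""
    return [f"{p.author}: {p.text}" for p in policy(posts)]
```

The hard part, as the next paragraph explains, is what the pool of posts handed to a third-party policy would have to contain.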
But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree to have their data shared with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.
It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.
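The postcard analogy can be made concrete. Unless a client explicitly upgrades the connection with STARTTLS (an extension bolted on years after the original 1982 design), the protocol sends everything in cleartext through each relay. A minimal sketch, assuming a hypothetical local relay on port 25:

```python
import smtplib

# Classic SMTP: without the STARTTLS upgrade, the envelope and body cross
# the wire in cleartext, readable by every relay along the way.
message = ("From: alice@example.test\r\n"
           "To: bob@example.test\r\n"
           "\r\n"
           "Meet at noon.")

with smtplib.SMTP("localhost", 25) as server:  # hypothetical local relay
    # server.starttls()  # the opt-in encryption step, omitted here as in
    #                    # SMTP's original design
    server.sendmail("alice@example.test", "bob@example.test", message)
```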
It also is telling that supporters of interoperability tend to point to small-scale platforms (e.g., Mastodon) or to protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems—e.g., that users will adequately monitor how various platforms use their data—they often assume unrealistic levels of user interest or technical acumen.
Interoperability in the DMA
The current draft of the DMA contains several provisions mandating forms of interoperability, all of which apply only to “gatekeepers”—i.e., the largest online platforms:
Mandated interoperability of “ancillary services” (Art 6(1)(f));
Real-time data portability (Art 6(1)(h)); and
Business-user access to their own and end-user data (Art 6(1)(i)).
The first provision (Art 6(1)(f)) is meant to force gatekeepers to allow, e.g., third-party payment or identification services—for example, to allow people to create social media accounts without providing an email address, which is possible using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as great a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.
However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.
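For background, services like “Sign in with Apple” follow the OAuth 2.0/OpenID Connect pattern: the platform redirects the user to an identity provider and receives back a token attesting to the user’s identity, so no email address needs to change hands. A minimal sketch of the authorization request a platform might construct, with every endpoint and identifier invented for illustration:

```python
import secrets
from urllib.parse import urlencode

# Hypothetical OpenID Connect-style authorization request. The user's browser
# is sent to this URL, authenticates with the identity provider, and returns
# with a code the platform exchanges for an identity token; no email address
# ever has to be shared with the platform.
AUTH_ENDPOINT = "https://id.example-provider.test/authorize"  # invented

params = {
    "response_type": "code",
    "client_id": "example-social-app",               # invented client ID
    "redirect_uri": "https://social.example.test/cb",
    "scope": "openid",                               # identity only, no email
    "state": secrets.token_urlsafe(16),              # anti-CSRF nonce
}
login_url = f"{AUTH_ENDPOINT}?{urlencode(params)}"
print(login_url)
```

If the identity provider behind such a flow is poorly secured, every account that relies on it is exposed, which is exactly the risk noted above.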
The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.
The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”
Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.
The third provision (Art 6(1)(i)) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.
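For concreteness, a “continuous and real-time,” read-only export might in practice be a cursor-based endpoint that an authorized third party polls. The following sketch is speculative; the DMA does not prescribe a technical design, and every URL and field here is invented:

```python
import json
import time
import urllib.request

# Invented read-only export endpoint: the third party may continuously read
# the authorizing user's own activity, but has no write access and receives
# nothing generated by other users.
FEED_URL = "https://graph.example-platform.test/me/activity"

def poll_activity(token: str, interval: float = 5.0):
    """Yield the user's own activity items as they appear (cursor-based)."""
    cursor = None
    while True:
        url = FEED_URL + (f"?after={cursor}" if cursor else "")
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"})
        with urllib.request.urlopen(req) as resp:
            page = json.load(resp)
        for item in page["items"]:
            yield item
        cursor = page.get("next_cursor", cursor)
        time.sleep(interval)  # "continuous and real-time" approximated by polling
```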
Why can’t stronger interoperability be safely mandated today?
Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so gatekeepers like Facebook end up with a legal duty to allow third parties to read data from and write data to Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API and, because the interoperability mandate creates a legal duty to deal, would remove the current safety valve of Facebook simply cutting off access. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:
1. Data sharing and mining via new APIs;
2. New opportunities for phishing and sock puppetry in a federated ecosystem; and
3. More friction for platforms trying to maintain a secure system.
Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”
There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.
So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.
What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.
The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.