
Under a draft “adequacy” decision unveiled today by the European Commission, data-privacy and security commitments made by the United States in an October executive order signed by President Joe Biden were found to comport with the EU’s General Data Protection Regulation (GDPR). If adopted, the decision would provide a legal basis for flows of personal data between the EU and the United States.

This is a welcome development, as some national data-protection authorities in the EU have begun to issue serious threats to stop U.S.-owned data-related service providers from offering services to Europeans. Pending more detailed analysis, I offer some preliminary thoughts here.

Decision Responds to the New U.S. Data-Privacy Framework

The Commission’s decision follows the changes to U.S. policy introduced by Biden’s Oct. 7 executive order. In its July 2020 Schrems II judgment, the EU Court of Justice (CJEU) invalidated the prior adequacy decision on grounds that EU citizens lacked sufficient redress under U.S. law and that U.S. law was not equivalent to “the minimum safeguards” of personal data protection under EU law. The new executive order introduced redress mechanisms that include creating a civil-liberties-protection officer in the Office of the Director of National Intelligence (DNI), as well as a new Data Protection Review Court (DPRC). The DPRC is proposed as an independent review body that will make decisions that are binding on U.S. intelligence agencies.

The old framework had sparked concerns about the independence of the DNI’s ombudsperson and about what were seen as insufficient safeguards against the external pressures that individual could face, including the threat of removal. Under the new framework, the independence and binding powers of the DPRC are grounded in regulations issued by the U.S. Attorney General.

To address concerns about the necessity and proportionality of U.S. signals-intelligence activities, the executive order also defines the “legitimate objectives” in pursuit of which such activities can be conducted. These activities would, according to the order, be conducted with the goal of “achieving a proper balance between the importance of the validated intelligence priority being advanced and the impact on the privacy and civil liberties of all persons, regardless of their nationality or wherever they might reside.”

Will the Draft Decision Satisfy the CJEU?

With this draft decision, the European Commission announced it has favorably assessed the executive order’s changes to the U.S. data-protection framework, which apply to foreigners from friendly jurisdictions (presumed to include the EU). If the Commission formally adopts an adequacy decision, however, the decision is certain to be challenged before the CJEU by privacy advocates. In my preliminary analysis after Biden signed the executive order, I summarized some of the concerns raised regarding two aspects relevant to the finding of adequacy: proportionality of data collection and availability of effective redress.

Opponents of granting an adequacy decision tend to rely on an assumption that a finding of adequacy requires substantive and procedural privacy safeguards virtually identical to those required within the EU. As noted by the European Commission in the draft decision, this position is not well-supported by CJEU case law, which clearly recognizes that only an “adequate level” and “essential equivalence” of protection are required from third countries under the GDPR.

To date, the CJEU has not had to specify precisely what, in its view, these provisions mean. Instead, the Court has been able simply to point to certain features of U.S. law and practice that fell significantly below the GDPR standard (e.g., that the official responsible for providing individual redress was not guaranteed to be independent from political pressure). Future legal challenges to a new Commission adequacy decision will most likely require the CJEU to provide more guidance on what “adequate” and “essentially equivalent” mean.

In the draft decision, the Commission carefully considered the features of U.S. law and practice that the Court previously found inadequate under the GDPR. Nearly half of the explanatory part of the decision is devoted to “access and use of personal data transferred from the [EU] by public authorities in the” United States, with the analysis grounded in the CJEU’s Schrems II decision. The Commission concludes that, collectively, all U.S. redress mechanisms available to EU persons:

…allow individuals to have access to their personal data, to have the lawfulness of government access to their data reviewed and, if a violation is found, to have such violation remedied, including through the rectification or erasure of their personal data.

The Commission accepts that individuals have access to their personal data processed by U.S. public authorities, but clarifies that this access may be legitimately limited—e.g., by national-security considerations. Unlike some of the critics of the new executive order, the Commission does not take the simplistic view that access to personal data must be guaranteed by the same procedure that provides binding redress, including the Data Protection Review Court. Instead, the Commission accepts that other avenues, like requests under the Freedom of Information Act, may perform that function.

Overall, the Commission presents a sophisticated, yet uncynical, picture of U.S. law and practice. The lack of cynicism, e.g., about the independence of the DPRC adjudicative process, will undoubtedly be seen by some as naïve and unrealistic, even if the “realism” in this case is based on speculation about what might happen (e.g., secret changes to U.S. policy), rather than on evidence. Given the changes adopted by the U.S. government, the key question for the CJEU will be whether to follow the Commission’s approach or that of the activists.

What Happens Next?

The draft adequacy decision will now be scrutinized by EU and national officials. It remains to be seen what the collective recommendations of the European Data Protection Board and of the representatives of EU national governments will be, but there are signs that some domestic data-protection authorities recognize that a finding of adequacy may be appropriate (see, e.g., the opinion from the Hamburg authority).

It is also likely that a significant portion of the European Parliament will be highly critical of the decision, even to the extent of recommending not to adopt it. Importantly, however, none of the consulted bodies have formal power to bind the European Commission on this question. The whole process is expected to take at least several months.

For many observers, the collapse of the crypto exchange FTX understandably raises questions about the future of the crypto economy, or even of public blockchains as a technology. The topic is high on the agenda of the U.S. Congress this week, with the House Financial Services Committee set for a Dec. 13 hearing with FTX CEO John J. Ray III and founder and former CEO Sam Bankman-Fried, followed by a Dec. 14 hearing of the Senate Banking Committee on “Crypto Crash: Why the FTX Bubble Burst and the Harm to Consumers.”

To some extent, the significance of the FTX case is likely to be exaggerated due to the outsized media attention that Bankman-Fried was able to generate. Nevertheless, many retail and institutional cryptocurrency holders were harmed by FTX and thus both users and policymakers will likely respond to what happened. In this post, I will contrast three perspectives on what may and should happen next for crypto.

‘Centralization Caused the FTX Fiasco’

The first perspective—likely the prevailing view in the crypto community—is that the FTX collapse was a failure of a centralized service, which should be emphatically distinguished from “true” or “crypto-native” decentralized services. The distinction between centralized and decentralized services is sharper in theory than in practice; decentralization is better understood as a spectrum than as a simple binary. There is, however, little doubt that crypto-asset exchanges like FTX, which predominantly operate “off-chain” (i.e., on their own servers, not on a public blockchain network), are the paradigmatic case of centralization in the crypto space. They are thus not “decentralized finance” (DeFi), even though much of DeFi today does rely on centralized services—e.g., for price discovery.

As Vivek Ramaswamy and Mark Lurie argued in their Wall Street Journal op-ed, the key feature of a centralized exchange (a “CEX”) “is that somebody (…) takes custody of user funds.” Even when custody is subject to government regulation—as in traditional stock exchanges—custody creates a risk that funds will be misappropriated or otherwise lost by the custodian, as reportedly happened at FTX.

By contrast, no single actor takes custody of customer funds on a decentralized exchange (DEX); these function as smart contracts, self-executing code run on a blockchain like Ethereum. DEX users do, however, face other risks, such as hacks, market manipulation, bugs in code, and situations that combine features of all three. Some of these risks are also present in traditional stock exchanges, but as crypto insiders recognize (see below), the scale and unpredictability of risks like bugs in smart contracts are potentially significant. Still, as Ramaswamy and Lurie observe, the largest DeFi protocols like “MakerDAO, Compound and Clipper hold more than $15 billion, and their user funds have never been hacked.”
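To make concrete what “self-executing code” means here, the following is a minimal, hypothetical sketch (in Python, for readability; real DEX contracts are written in languages like Solidity) of the constant-product market-maker logic used by many DEXes. All names and numbers are illustrative, not any real protocol’s code.

```python
# Minimal sketch of a constant-product swap pool, the mechanism behind
# many DEXes. Illustrative only; not any real protocol's implementation.

class ConstantProductPool:
    """Holds reserves of two assets; each swap preserves x * y >= k."""

    def __init__(self, reserve_x: float, reserve_y: float, fee: float = 0.003):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y
        self.fee = fee  # a 0.3% fee is a common choice

    def swap_x_for_y(self, amount_x: float) -> float:
        """Deposit amount_x of asset X and receive asset Y in return."""
        amount_x_after_fee = amount_x * (1 - self.fee)
        k = self.reserve_x * self.reserve_y          # the pool invariant
        new_reserve_x = self.reserve_x + amount_x_after_fee
        amount_y_out = self.reserve_y - k / new_reserve_x
        self.reserve_x += amount_x                   # full deposit enters the pool
        self.reserve_y -= amount_y_out
        return amount_y_out

pool = ConstantProductPool(1_000_000, 1_000_000)
out = pool.swap_x_for_y(10_000)
# The trader receives slightly less than 10,000 Y: the fee plus price impact.
```

The point of the sketch is that the pool’s rules are fixed in code: no operator takes custody of user funds or can discretionarily alter balances, which is precisely the property that distinguishes a DEX from FTX-style custody.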

Aside from the lack of custody, DeFi also offers public transparency in two key respects: transparency of the self-executing code powering the DEX and transparency of completed transactions. In contrast, part of what enabled the FTX debacle is that external observers were not able to monitor the financial situation of the centralized exchange. The blockchain-based solution commonly put forward for CEX services—proof of reserves—may not match the transparency that DEX services can offer. Even if a proof-of-reserves requirement provided a reliable, real-time view of an exchange’s assets, it is unlikely to do the same for its liabilities. Because it is a business, a CEX may always incur liabilities that are not visible—or not easily visible—on the blockchain, such as liability to pay damages.
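For comparison, here is a rough sketch of how a Merkle-tree proof of reserves can work (strictly, a proof that a user’s balance is included in the exchange’s attested liabilities). Function names are my own, real schemes add salting, padding, and range proofs, and, as noted above, none of this reveals off-chain liabilities.

```python
# Illustrative Merkle-tree "proof of reserves" sketch: the exchange
# publishes a root hash committing to all user balances, and each user can
# verify that their own balance is counted, without seeing other users' data.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves: list) -> list:
    """Build all levels of a Merkle tree (leaf count assumed a power of two)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def proof_for(levels: list, index: int) -> list:
    """Collect the sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # sibling node at this level
        index //= 2
    return proof

def verify(leaf_hash: bytes, proof: list, root: bytes, index: int) -> bool:
    """Recompute the root from a leaf and its sibling path."""
    node = leaf_hash
    for sibling in proof:
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == root

# Four users; each leaf commits to an identifier and a balance.
balances = [("alice", 120), ("bob", 45), ("carol", 300), ("dave", 10)]
leaves = [h(f"{user}:{amount}".encode()) for user, amount in balances]
levels = build_levels(leaves)
root = levels[-1][0]

# Bob (index 1) checks that his balance is included in the published root.
bob_proof = proof_for(levels, 1)
assert verify(leaves[1], bob_proof, root, 1)
```

Even a scheme like this only commits to what the exchange chooses to put in the tree; it cannot surface liabilities, such as damages claims, that never appear on-chain.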

Some have proposed that a CEX could establish trust by offering to each user legally binding “proof of insurance” from a reputable insurer. But this simply moves the locus of trust to the insurer, which may or may not be acceptable to users, depending on the circumstances.

‘The Ecosystem Needs Time to Mature Before We Get Even More Attention’

As a critique of today’s centralized crypto services, the first perspective is persuasive. The implication that decentralized solutions offer a fully ready alternative has been called into question, however, both within the crypto space and from the outside. One internal voice of caution has been Ethereum founder Vitalik Buterin, one of crypto’s key thought leaders. Writing shortly before the FTX collapse, Buterin said:

… I don’t think we should be enthusiastically pursuing large institutional capital at full speed. I’m actually kinda happy a lot of the ETFs are getting delayed. The ecosystem needs time to mature before we get even more attention.

He added:

… regulation that leaves the crypto space free to act internally but makes it harder for crypto projects to reach the mainstream is much less bad than regulation that intrudes on how crypto works internally.

Following the FTX collapse, Buterin elaborated on the risks he sees for decentralized crypto services, singling out vulnerabilities in smart-contract code as a major concern.

Buterin’s vision is one of a de facto regulatory sandbox, allowing experimentation and technological development, but combined with restrictions on the expanding integration of crypto with the broader economy.

Centralization Will Stay, but with Heavier Regulation

It is even more understandable that observers who come from traditional finance have reservations about the potential of decentralized services to replace the centralized ones, at least in the near term. One example is JPMorgan’s recent research report. The report predicts that institutional crypto custodians, not DeFi, will benefit the most from FTX’s collapse. According to JPMorgan, this will happen due to, among other factors:

  • Regulatory pressure to unbundle various roles in crypto-finance, such as brokerage-trading, lending, clearing, and custody. The argument is that—by combining trading, clearing, and settlement—DeFi solutions operate more efficiently than centralized services and will thus “face greater scrutiny.”
  • DeFi services being unattractive to large institutional investors because of lower transaction speeds and the public nature of blockchain transactions, both of which run counter to institutional traders’ interest in keeping their trading history and strategies confidential.

The report listed several other concerns, including smart-contract risks (which Buterin also singled out) and front-running of trades (part of the wider “MEV” extraction phenomenon), which may lead to worse execution prices for a trader.

Those concerns do refer to real issues in DeFi, although, as the report notes, solutions to address them are under active development. But it is also important, when comparing the current state of DeFi to custodial finance, to assess the relative benefits of the latter realistically. For example, the risk of market manipulation in DeFi needs to be weighed against the opacity of custodial services, which creates opportunities for rent extraction at customer expense.

JPMorgan stressed that the likely reaction to the FTX collapse will be increased pressure for heavier regulation of custody of customer funds, transparency requirements and, as noted earlier, unbundling of various roles in crypto-finance. The report’s prediction that, in doing so, policymakers will not be inclined to distinguish between centralized and decentralized services may be accurate, but that would be an unfortunate and unwarranted outcome.

The risks that centralized services pose—due to their lack of transparency and their taking custody of customer funds—do not translate straightforwardly to decentralized services. Regarding unbundling, it should be noted that a key rationale for this regulatory solution is to prevent conflicts of interest. But a DEX that operates autonomously according to publicly shared logic (open-source code) does not pose the same conflict-of-interest risks that a CEX does. Decentralized services do face risks, and there may be good reasons to seek policy responses to those risks, but the unique features of decentralized services should be appropriately accommodated. Admittedly, this is a challenging task, partly due to the difficulty of defining decentralization in the law.

Conclusion

The collapse of FTX was a failure of a centralized model of crypto-asset services. This does not mean that centralized services do not have a future, but more work will need to be done to build stakeholder trust. Moreover, the FTX affair has clearly increased the pressure for additional regulation of centralized services, although it is unclear which specific regulatory responses it will prompt.

Just before the FTX collapse, the EU had nearly finalized its Markets in Crypto-Assets (“MiCA”) Regulation that was intended to regulate centralized “crypto-assets service providers.” There is an argument to be made that MiCA might have stopped a situation like that at FTX, but—given the vague general language used in MiCA—whether this would happen in future cases depends chiefly on how regulators implement prudential oversight.

Given the well-known cases of sophisticated regulators failing to prevent harm—e.g., in MF Global and Wirecard—the mere existence of prudential oversight may be insufficient to ground trust in centralized services. Thus, JPMorgan’s thesis that centralized services will benefit from the FTX affair lacks sufficient justification. Perhaps, even without the involvement of regulators, centralized providers will develop mechanisms for reliable transparency—such as “proof of reserves”—although there is a significant risk here of mere “transparency theatre.”

As to decentralized crypto services, the FTX collapse may be a chance for broader adoption, but Buterin’s words of caution should not be dismissed. JPMorgan may also be right to suggest that policymakers will not be inclined to distinguish between centralized and decentralized services and that the pressure for increased regulation will spill over to DeFi. As I noted earlier, however, policymakers would do well to be attentive to the relevant differences. For example, centralized services pose risks due to lack of transparency and their control of customer funds—two significant risks that do not necessarily apply to decentralized services. Hence, unbundling of the kind that could be beneficial for centralized services may bring little of value to a DEX, while risking the loss of some core benefits of decentralized solutions.

European Union officials insist that the executive order President Joe Biden signed Oct. 7 to implement a new U.S.-EU data-privacy framework must address European concerns about U.S. agencies’ surveillance practices. Awaited since March, when U.S. and EU officials reached an agreement in principle on a new framework, the order is intended to replace an earlier data-privacy framework that was invalidated in 2020 by the Court of Justice of the European Union (CJEU) in its Schrems II judgment.

This post is the first in what will be a series of entries examining whether the new framework satisfies the requirements of EU law or, as some critics argue, falls short. The critics include Max Schrems’ organization NOYB (for “none of your business”), which has announced that it “will likely bring another challenge before the CJEU” if the European Commission officially decides that the new U.S. framework is “adequate.” In this introduction, I will highlight the areas of contention based on NOYB’s “first reaction.”

The overarching legal question that the European Commission (and likely also the CJEU) will need to answer, as spelled out in the Schrems II judgment, is whether the United States “ensures an adequate level of protection for personal data essentially equivalent to that guaranteed in the European Union by the GDPR, read in the light of Articles 7 and 8 of the [EU Charter of Fundamental Rights].” Importantly, as Theodore Christakis, Kenneth Propp, and Peter Swire point out, “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, either substantively or procedurally. The precise degree of flexibility remains an open question, however, and one that the EU Court may need to clarify to a much greater extent.

Proportionality and Bulk Data Collection

Under Article 52(1) of the EU Charter of Fundamental Rights, restrictions of the right to privacy must meet several conditions. They must be “provided for by law” and “respect the essence” of the right. Moreover, “subject to the principle of proportionality, limitations may be made only if they are necessary” and meet one of the objectives recognized by EU law or “the need to protect the rights and freedoms of others.”

As NOYB has acknowledged, the new executive order supplemented the phrasing “as tailored as possible” present in 2014’s Presidential Policy Directive on Signals Intelligence Activities (PPD-28) with language explicitly drawn from EU law: mentions of the “necessity” and “proportionality” of signals-intelligence activities related to “validated intelligence priorities.” But NOYB counters:

However, despite changing these words, there is no indication that US mass surveillance will change in practice. So-called “bulk surveillance” will continue under the new Executive Order (see Section 2 (c)(ii)) and any data sent to US providers will still end up in programs like PRISM or Upstream, despite of the CJEU declaring US surveillance laws and practices as not “proportionate” (under the European understanding of the word) twice.

It is true that the Schrems II Court held that U.S. law and practices do not “[correlate] to the minimum safeguards resulting, under EU law, from the principle of proportionality.” But it is crucial to note the specific reasons the Court gave for that conclusion. Contrary to what NOYB suggests, the Court did not simply state that bulk collection of data is inherently disproportionate. Instead, the reasons it gave were that “PPD-28 does not grant data subjects actionable rights before the courts against the US authorities” and that, under Executive Order 12333, “access to data in transit to the United States [is possible] without that access being subject to any judicial review.”

CJEU case law does not support the idea that bulk collection of data is inherently disproportionate under EU law; bulk collection may be proportionate, taking into account the procedural safeguards and the magnitude of the interests protected in a given case. (For another discussion of safeguards, see the CJEU’s decision in La Quadrature du Net.) Further complicating the legal analysis here is that, as mentioned, it is far from obvious that EU law requires that foreign countries offer the same procedural or substantive safeguards that are applicable within the EU.

Effective Redress

The Court’s Schrems II conclusion therefore primarily concerns the effective redress available to EU citizens against potential restrictions of their right to privacy from U.S. intelligence activities. The new two-step system proposed by the Biden executive order includes creation of a Data Protection Review Court (DPRC), which would be an independent review body with power to make binding decisions on U.S. intelligence agencies. In a comment pre-dating the executive order, Max Schrems argued that:

It is hard to see how this new body would fulfill the formal requirements of a court or tribunal under Article 47 CFR, especially when compared to ongoing cases and standards applied within the EU (for example in Poland and Hungary).

This comment raises two distinct issues. First, Schrems seems to suggest that an adequacy decision can only be granted if the available redress mechanism satisfies the requirements of Article 47 of the Charter. But this is a hasty conclusion. The CJEU’s phrasing in Schrems II is more cautious:

…Article 47 of the Charter, which also contributes to the required level of protection in the European Union, compliance with which must be determined by the Commission before it adopts an adequacy decision pursuant to Article 45(1) of the GDPR

In arguing that Article 47 “also contributes to the required level of protection,” the Court is not saying that it determines the required level of protection. This is potentially significant, given that the standard of adequacy is “essential equivalence,” not that it be procedurally and substantively identical. Moreover, the Court did not say that the Commission must determine compliance with Article 47 itself, but with the “required level of protection” (which, again, must be “essentially equivalent”).

Second, there is the related but distinct question of whether the redress mechanism is effective under the applicable standard of “required level of protection.” Christakis, Propp, and Swire offered a helpful analysis suggesting that it is, considering the proposed DPRC’s independence, effective investigative powers, and authority to issue binding determinations. I will offer a more detailed analysis of this point in future posts.

Finally, NOYB raised a concern that “judgment by ‘Court’ [is] already spelled out in Executive Order.” This concern seems to be based on the view that a decision of the DPRC (“the judgment”) and what the DPRC communicates to the complainant are the same thing. In other words, the view is that the legal effects of a DPRC decision are exhausted by providing the individual with the neither-confirm-nor-deny statement set out in Section 3 of the executive order. This is clearly incorrect: the DPRC has the power to issue binding directions to intelligence agencies. The actual binding determinations of the DPRC are not predetermined by the executive order; only the information to be provided to the complainant is.

What may call for closer consideration are issues of access to information and data. For example, in La Quadrature du Net, the CJEU looked at the difficult problem of notification of persons whose data has been subject to state surveillance, requiring individual notification “only to the extent that and as soon as it is no longer liable to jeopardise” the law-enforcement tasks in question. Given the “essential equivalence” standard applicable to third-country adequacy assessments, however, it does not automatically follow that individual notification is required in that context.

Moreover, it also does not necessarily follow that adequacy requires that EU citizens have a right to access the data processed by foreign government agencies. The fact that there are significant restrictions on rights to information and to access in some EU member states, though not definitive (after all, those countries may be violating EU law), may be instructive for the purposes of assessing the adequacy of data protection in a third country, where EU law requires only “essential equivalence.”

Conclusion

There are difficult questions of EU law that the European Commission will need to address in the process of deciding whether to issue a new adequacy decision for the United States. It is also clear that an affirmative decision from the Commission will be challenged before the CJEU, although the arguments for such a challenge are not yet well-developed. In future posts I will provide more detailed analysis of the pivotal legal questions. My focus will be to engage with the forthcoming legal analyses from Schrems and NOYB and from other careful observers.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

This legislative process proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU). 

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to most of AML/CFT.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as “service providers”) to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties’ names, as well as the address, country, official personal document number, customer identification number, or the sender’s date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information “on the basis of documents, data or information obtained from a reliable and independent source” (Articles 4(4), 7(3), 14(5), 16(2) TFR).

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data is overwhelmingly likely to go beyond information on the civil identity of customers and will often, if not almost always, allow inferring even sensitive personal data about the customer.

The data-collection and retention obligations in the TFR are general and indiscriminate: the relevant provisions draw no distinction based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). Even that distinction (“has reasonable grounds for suspecting money laundering or terrorist financing”) arguably lacks the precision required under CJEU case law.
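To illustrate the breadth of the obligation, here is a hypothetical sketch of the record a service provider might assemble for each transfer. The field names and example values are my own; the TFR prescribes the categories of data to be collected and retained, not a concrete message format.

```python
# Hypothetical sketch of a travel-rule record accompanying a crypto-asset
# transfer under the TFR. Field names and values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class PartyInfo:
    name: str
    account_or_wallet: str              # payment-account number or wallet address
    address: str = ""                   # plus at least one further identifier:
    official_document_number: str = ""  # address, official document number,
    customer_id: str = ""               # customer ID, or
    date_and_place_of_birth: str = ""   # date and place of birth

@dataclass
class TravelRuleRecord:
    originator: PartyInfo               # this information must "travel" with the
    beneficiary: PartyInfo              # transfer to the beneficiary's provider
    asset: str
    amount: str
    retention_years: int = 5            # indiscriminate retention period (Art. 21)

record = TravelRuleRecord(
    originator=PartyInfo(
        name="A. Sender",
        account_or_wallet="WALLET-A",
        official_document_number="PASSPORT-123",
    ),
    beneficiary=PartyInfo(name="B. Recipient", account_or_wallet="WALLET-B"),
    asset="ETH",
    amount="1.5",
)
```

A record along these lines would be created and retained for every covered transfer, regardless of any suspicion of criminality, which is precisely the generality at issue in the analysis below.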

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU handed down its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive’s provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some features of the PNR regime challenged in that case are strikingly similar to features of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no “objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers’ air travel.” The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7-8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data-retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to transaction data for which there is “objective evidence capable of establishing a risk” of a connection to serious crime.

The court also considered the issue of government access to data that has already been collected. Under the CJEU’s established interpretation of the EU Charter, “it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body.” In the PNR regime, at least some countries (such as Belgium) assigned this role to their “passenger information units” (PIUs). The court noted that a PIU is “an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities” (e.g., from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-a-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19, on provision of information) does not provide for prior independent review of access to retained data. To the extent that such review would be conducted by Financial Intelligence Units (FIUs) under the AML Directive, this raises concerns very similar to those the CJEU identified with PIUs under the PNR regime. While Article 32 of the AML Directive requires FIUs to be independent, that does not necessarily mean they are independent in the ways required of an authority that decides on access to retained data under Articles 7-8 of the EU Charter. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime point to evidence that it contributes to preventing or prosecuting some crime. But that is not the relevant question when it comes to proportionality. The relevant question is whether those measures are at least as effective as less costly, more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were “120 times the amount successfully recovered from criminals” and exceeded the estimated total of criminal funds (including funds not seized or identified).

The fact that the current AML/CFT regime is a de facto global standard cannot serve as a sufficient justification either, given that EU fundamental law is perfectly comfortable rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule was unquestioningly imported into EU law from U.S. law (via FATF), even though the standards of constitutional privacy protection in the United States differ considerably from those under the EU Charter. The Court of Justice would likely take note of this in any putative challenge to the TFR or other elements of the AML/CFT regime.

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime that, according to the best evidence, the current AML/CFT regime serves ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted broadly without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.

Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.

Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it is worth examining how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.

A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:

  • data processing is necessary in connection with a contractual relationship with the user, or
  • “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).

For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing but excludes the option to collect or process “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data.” It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. But the expected result will be many, many more popup consent queries, like those that already bedevil European users.

Using personal data to create new products

Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to the permissible uses of covered data to develop new products. Such uses would clearly be allowed only if each data subject concerned could be said to have “requested” the specific future product, which will rarely, if ever, be the case. By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.

Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.

While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.

The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.

Under the GDPR, aside from the data subject’s consent, the firm also could rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that due to requirements that the interests of the data controller and the data subject must be appropriately weighed, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.

Developing this path in the ADPPA would arguably provide a more sensible basis for data uses like the reuse of data for new product development. This could be superior even to express consent, which faces problems like “consent fatigue.” These are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.

Problems with ‘de-identified data’

Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that is:

  1. derived from identifiable data, but
  2. that hold a possibility of re-identification (weaker than “reasonably linkable”) and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):

  1. to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
  2. to publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device;”
  3. to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty — public commitment — unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from pursuing data minimization (through de-identification) whenever they may at any point in the future need to link the data with individuals again. It seems that the drafters had some very specific (and likely rare) mischief in mind here but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a problem very similar to the second duty’s. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data is for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. One would have expected the opposite: a possibility of sharing data in de-identified form, in line with the principle of data minimization. What the ADPPA does instead is effectively impose a duty to share de-identified personal data together with identifying information: a truly bizarre result, directly contrary to that principle.

Conclusion

The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).

Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA’s drafters might do better to reuse the best features of the GDPR’s conceptual structure while cutting its excesses. Reinventing the wheel by proposing new concepts did not work well in this draft.

The European Union’s Digital Markets Act (DMA) has been finalized in principle, although some legislative details are still being negotiated. Alas, our earlier worries about user privacy still have not been addressed adequately.

The key rules to examine are the DMA’s interoperability mandates. The most recent DMA text introduced a potentially very risky new kind of compulsory interoperability “of number-independent interpersonal communications services” (e.g., for services like WhatsApp). However, this obligation comes with a commendable safeguard in the form of an equivalence standard: interoperability cannot lower the current level of user security. Unfortunately, the DMA’s other interoperability provisions lack similar security safeguards.

The lack of serious consideration of security issues is perhaps best illustrated by how the DMA might actually preclude makers of web browsers from protecting their users from some of the most common criminal attacks, like phishing.

Key privacy concern: interoperability mandates

The original DMA proposal included several interoperability and data-portability obligations regarding the “core platform services” of platforms designated as “gatekeepers”—i.e., the largest online platforms. Those provisions were changed considerably during the legislative process. Among its other provisions, the most recent (May 11, 2022) version of the DMA includes:

  1. a prohibition on restricting users—“technically or otherwise”—from switching among and subscribing to software and services “accessed using the core platform services of the gatekeeper” (Art 6(6));
  2. an obligation for gatekeepers to allow interoperability with their operating system or virtual assistant (Art 6(7)); and
  3. an obligation “on interoperability of number-independent interpersonal communications services” (Art 7).

To varying degrees, these provisions attempt to safeguard privacy and security interests, but the first two do so in a clearly inadequate way.

First, the Article 6(6) prohibition on restricting users from using third-party software or services “accessed using the core platform services of the gatekeeper” notably applies to web services (web content) that a user can access through the gatekeeper’s web browser (e.g., Safari for iOS). (Web browsers are defined as core platform services in Art 2(2) DMA.)

Given that web content is typically not installed in the operating system, but accessed through a browser (i.e., likely “accessed using a core platform service of the gatekeeper”), earlier “side-loading” provisions (Article 6(4), which is discussed further below) would not apply here. This leads to what appears to be a significant oversight: gatekeepers would be almost entirely barred from protecting their users when those users access the Internet through web browsers, one of the most significant channels of privacy and security risks.

The Federal Bureau of Investigation (FBI) has identified “phishing” as one of the three top cybercrime types, based on the number of victim complaints. A successful phishing attack normally involves a user accessing a website that is impersonating a service the user trusts (e.g., an email account or corporate login). Browser developers can prevent some such attacks, e.g., by keeping “block lists” of websites known to be malicious and warning about, or even preventing, access to such sites. Prohibiting platforms from restricting their users’ access to third-party services would also prohibit this vital cybersecurity practice.
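The block-list practice described above can be sketched in a few lines. This is a toy illustration, not how any real browser implements it: production browsers rely on services such as Google Safe Browsing that match hashed URL prefixes against frequently updated lists, and the hostnames below are entirely hypothetical.

```python
# Toy sketch of a browser-style block-list check (hypothetical hostnames).
from urllib.parse import urlparse

# Hypothetical list of hostnames known to impersonate trusted services.
BLOCK_LIST = {"examp1e-login.com", "secure-bank-verify.net"}

def check_navigation(url: str) -> str:
    """Return 'block' if the URL's host is on the block list, else 'allow'."""
    host = urlparse(url).hostname or ""
    return "block" if host in BLOCK_LIST else "allow"

print(check_navigation("https://examp1e-login.com/signin"))  # block
print(check_navigation("https://example.com/"))              # allow
```

Under a literal reading of Art 6(6), even this elementary interposition between the user and a third-party web service could be characterized as "restricting" access.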

Under Art 6(4), in the case of installed third-party software, the gatekeepers can take:

…measures to ensure that third party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper, provided that such measures go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

The gatekeepers can also apply:

measures and settings other than default settings, enabling end users to effectively protect security in relation to third party software applications or software application stores, provided that such measures and settings go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

None of those safeguards, insufficient as they are—see the discussion below of Art 6(7)—are present in Art 6(6). Worse still is that the anti-circumvention rule in Art 13(6) applies here, prohibiting gatekeepers from offering “choices to the end-user in a non-neutral manner.” That is precisely what a web-browser developer does when warning users of security risks or when blocking access to websites known to be malicious—e.g., to protect users from phishing attacks.

This concern is not addressed by the general provision in Art 8(1) requiring the gatekeepers to ensure “that the implementation” of the measures under the DMA complies with the General Data Protection Regulation (GDPR), as well as “legislation on cyber security, consumer protection, product safety.”

The first concern is that this would not allow the gatekeepers to offer a higher standard of user protection than that required by the arguably weak or overly vague existing legislation. Also, given that the DMA’s rules (including future delegated legislation) are likely to be more specific—in the sense of constituting lex specialis—than EU rules on privacy and security, establishing a coherent legal interpretation that would allow gatekeepers to protect their users is likely to be unnecessarily difficult.

Second, the obligation from Art 6(7) for gatekeepers to allow interoperability with their operating system or virtual assistant only includes the first kind of a safeguard from Art 6(4), concerning the risk of compromising “the integrity of the operating system, virtual assistant or software features provided by the gatekeeper.” However, the risks from which service providers aim to protect users are by no means limited to system “integrity.” A user may be a victim of, e.g., a phishing attack that does not explicitly compromise the integrity of the software they used.

Moreover, as in Art 6(4), there is a problem with the “strictly necessary and proportionate” qualification. This standard may be too high and may push gatekeepers to offer more lax security to avoid liability for adopting measures that would be judged by the European Commission and the courts as going beyond what is strictly necessary or indispensable.

The relevant recitals from the DMA preamble, instead of aiding in interpretation, add more confusion. The most notorious example is in recital 50, which states that gatekeepers “should be prevented from implementing” measures that are “strictly necessary and proportionate” to effectively protect user security “as a default setting or as pre-installation.” What possible justification can there be for prohibiting providers from setting a “strictly necessary” security measure as a default? We can hope that this manifestly bizarre provision will be corrected in the final text, together with the other issues identified above.

Finally, there is the obligation “on interoperability of number-independent interpersonal communications services” from Art 7. Here, the DMA takes a different and much better approach to safeguarding user privacy and security. Art 7(3) states that:

The level of security, including the end-to-end encryption, where applicable, that the gatekeeper provides to its own end users shall be preserved across the interoperable services.

There may be some concern that the Commission or the courts will not treat this rule with sufficient seriousness. Ensuring that user security is not compromised by interoperability may take a long time and may require excluding many third-party services that had hoped to benefit from this DMA rule. Nonetheless, EU policymakers should resist watering down the standard of equivalence in security levels, even if it renders Art 7 a dead letter for the foreseeable future.

It is also worth noting that there will be no presumption of user opt-in to any interoperability scheme (Art 7(7)-(8)), which means that third-party service providers will not be able to simply “onboard” all users from a gatekeeper’s service without their explicit consent. This is to be commended.

Conclusion

Despite some improvements (the equivalence standard in Art 7(3) DMA), the current DMA language still betrays, as I noted previously, “a policy preference for privileging uncertain and speculative competition gains at the cost of introducing new and clear dangers to information privacy and security.” Jane Bambauer of the University of Arizona Law School came to similar conclusions in her analysis of the DMA, in which she warned:

EU lawmakers should be aware that the DMA is dramatically increasing the risk that data will be mishandled. Nevertheless, even though a new scandal from the DMA’s data interoperability requirement is entirely predictable, I suspect EU regulators will evade public criticism and claim that the gatekeeping platforms are morally and financially responsible.

The DMA’s text is not yet entirely finalized. It may still be possible to extend the approach adopted in Article 7(3) to other privacy-threatening rules, especially in Article 6. Such a requirement that any third-party service providers offer at least the same level of security as the gatekeepers is eminently reasonable and is likely what the users themselves would expect. Of course, there is always a risk that a safeguard of this kind will be effectively nullified in administrative or judicial practice, but this may not be very likely, given the importance that EU courts typically attach to privacy.

European Union (EU) legislators are now considering an Artificial Intelligence Act (AIA)—the original draft of which was published by the European Commission in April 2021—that aims to ensure AI systems are safe in a number of uses designated as “high risk.” One of the big problems with the AIA is that, as originally drafted, it is not at all limited to AI, but would be sweeping legislation covering virtually all software. The EU governments seem to have realized this and are trying to fix the proposal. However, some pressure groups are pushing in the opposite direction. 

While there can be reasonable debate about what constitutes AI, almost no one would intuitively consider most of the software covered by the original AIA draft to be artificial intelligence. Ben Mueller and I covered this in more detail in our report “More Than Meets The AI: The Hidden Costs of a European Software Law.” Among other issues, the proposal would seriously undermine the legitimacy of the legislative process: the public is told that a law is meant to cover one sphere of life, but it mostly covers something different. 

It also does not appear that the Commission drafters seriously considered the costs that would arise from imposing the AIA’s regulatory regime on virtually all software across a sphere of “high-risk” uses that include education, employment, and personal finance.

The following example illustrates how the AIA would work in practice: A school develops a simple logic-based expert system to assist in making decisions related to admissions. It could be as basic as a Microsoft Excel macro that checks if a candidate is in the school’s catchment area based on the candidate’s postal code, by comparing the content of one column of a spreadsheet with another column. 
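To make concrete just how trivial such a system can be, the same catchment-area check can be sketched in Python rather than as an Excel macro. The postal codes and candidate names are hypothetical; the logic mirrors comparing one spreadsheet column (candidate postal codes) against another (the catchment area’s codes).

```python
# The entire "high-risk AI system" from the example, in a few lines.
# Hypothetical postal codes defining the school's catchment area.
CATCHMENT_CODES = {"10115", "10117", "10119"}

def in_catchment(postal_code: str) -> bool:
    """True if a candidate's postal code falls within the catchment area."""
    return postal_code.strip() in CATCHMENT_CODES

# Hypothetical candidates, as they might appear in a spreadsheet column.
candidates = {"Ana": "10117", "Ben": "80331"}
admissible = {name: in_catchment(code) for name, code in candidates.items()}
print(admissible)  # {'Ana': True, 'Ben': False}
```

Nothing here learns, infers, or models anything; it is a set-membership test. Yet, under the AIA’s definitions, deploying it for admissions decisions would trigger the full high-risk compliance regime.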

Under the AIA’s current definitions, this would not only be an “AI system,” but also a “high-risk AI system” (because it is “intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions” – Annex III of the AIA). Hence, to use this simple Excel macro, the school would be legally required to, among other things:

  1. put in place a quality management system;
  2. prepare detailed “technical documentation”;
  3. create a system for logging and audit trails;
  4. conduct a conformity assessment (likely requiring costly legal advice);
  5. issue an “EU declaration of conformity”; and
  6. register the “AI system” in the EU database of high-risk AI systems.

This does not sound like proportionate regulation. 

Some governments of EU member states have been pushing for a narrower definition of an AI system, drawing rebuke from pressure groups Access Now and Algorithm Watch, who issued a statement effectively defending the “all-software” approach. For its part, the European Council, which represents member states, unveiled compromise text in November 2021 that changed general provisions around the AIA’s scope (Article 3), but not the list of in-scope techniques (Annex I).

While the new definition appears slightly narrower, it remains overly broad and will create significant uncertainty. It is likely that many software developers and users will require costly legal advice to determine whether a particular piece of software in a particular use case is in scope or not. 

The “core” of the new definition is found in Article 3(1)(ii), according to which an AI system is one that: “infers how to achieve a given set of human-defined objectives using learning, reasoning or modeling implemented with the techniques and approaches listed in Annex I.” This redefinition does precious little to solve the AIA’s original flaws. A legal inquiry focused on an AI system’s capacity for “reasoning” and “modeling” will tend either toward overinclusion or toward imagining a software reality that doesn’t exist at all.

The revised text can still be interpreted so broadly as to cover virtually all software. Given that the list of in-scope techniques (Annex I) was not changed, any “reasoning” implemented with “Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems” (i.e., all software) remains in scope. In practice, the combined effect of those two provisions will be hard to distinguish from the original Commission draft. In other words, we still have an all-software law, not an AI-specific law. 

The AIA deliberations highlight two basic difficulties in regulating AI. First, it is likely that many activists and legislators have in mind science-fiction scenarios of strong AI (or “artificial general intelligence”) when pushing for regulations that will apply in a world where only weak AI exists. Strong AI is AI that is at least equal to human intelligence and is therefore capable of some form of agency. Weak AI is akin to software techniques that augment human processing of information. For as long as computer scientists have been thinking about AI, there have been serious doubts that software systems can ever achieve generalized intelligence or become strong AI.

Thus, what’s really at stake in regulating AI is regulating software-enabled extensions of human agency. But leaving aside the activists who explicitly do want controls on all software, lawmakers who promote the AIA have something conceptually distinct from “software” in mind. This raises the question of whether the “AI” that lawmakers imagine they are regulating is actually a null set. If so, such laws can only regulate ordinary software (the equivalent of Excel spreadsheets at scale), and lawmakers need to think seriously about how they intervene. For such interventions to be deemed necessary, there should at least be quantifiable consumer harms that require redress. Focusing regulation on topics as broad as “AI” or “software” is almost certain to generate unacceptable unseen costs.

Even if we limit our concern to the real, weak AI, settling on an accepted “scientific” definition will be a challenge. Lawmakers inevitably will include either too much or too little. Overly inclusive regulation may seem like a good way to “future proof” the rules, but such future-proofing comes at the cost of significant legal uncertainty. It will also come at the cost of making some uses of software too costly to be worthwhile.

There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.) and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.

In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability and (2) mandating device neutrality (which leads to the possibility of sideloading applications, a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).

Interoperability

Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services and with large online platforms, in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. This way, it might matter less than it does now what content moderation decisions Facebook or other platforms make. Facebook users could choose alternative content moderators, delivering the kind of news feed that those users expect.
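As an illustration of the “flavors” idea, here is a minimal sketch (all names hypothetical) of a feed whose ordering is delegated to interchangeable third-party ranking functions:

```python
# Illustrative sketch (hypothetical names): the platform exposes one pool
# of posts; interchangeable rankers decide how the feed is ordered.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    likes: int
    timestamp: int  # seconds since epoch

Ranker = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    """One "flavor": newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement(posts: list[Post]) -> list[Post]:
    """Another "flavor": most-liked first."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render_feed(posts: list[Post], ranker: Ranker) -> list[str]:
    return [p.text for p in ranker(posts)]

posts = [
    Post("a", "old but popular", likes=90, timestamp=100),
    Post("b", "new but quiet", likes=2, timestamp=200),
]
print(render_feed(posts, chronological))  # ['new but quiet', 'old but popular']
print(render_feed(posts, engagement))     # ['old but popular', 'new but quiet']
```

The catch, as discussed below, is that any real third-party ranker needs access to the underlying pool of posts, including content created by users who never consented to that access.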

The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
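A small stdlib sketch illustrates the point: nothing in the basic mail format authenticates the sender, so a message can claim any “From” address (example domains only). Sender authentication had to be retrofitted later through add-ons such as SPF, DKIM, and DMARC.

```python
# Illustrative only (example domains): the base mail format carries no
# sender authentication. A client can write an arbitrary "From:" header,
# and nothing in the message format itself verifies the claim.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@example-bank.com"   # entirely unverified claim
msg["To"] = "victim@example.org"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please transfer the funds today.")

print(msg["From"])  # the spoofed address is accepted as-is
```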

To provide alternative interfaces or moderation services for social-media platforms using currently available technology, third-party developers would have to be able to access much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

There are several constraints for interoperability frameworks that must be in place to safeguard privacy and security effectively.

First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.

Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.

If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.

It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether the interoperating third parties’ IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.
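A minimal sketch (hypothetical address ranges) of such a “technical filter” shows why it is weak: the check can only see the apparent source address, which a motivated actor can trivially change by relaying through a host in an allowed jurisdiction.

```python
# Illustrative sketch (hypothetical allowlist): a naive "technical filter"
# that gates API access by the caller's apparent network. A bad actor
# defeats it by renting a VPS or VPN exit inside an allowed range, so the
# filter says nothing about who is actually behind the address.
import ipaddress

# Hypothetical mapping of network prefixes to "approved" jurisdictions.
ALLOWED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),  # pretend: a range hosted in the EU
]

def caller_allowed(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED_PREFIXES)

print(caller_allowed("203.0.113.7"))   # True: passes, regardless of who controls the host
print(caller_allowed("198.51.100.9"))  # False: until the actor relays through an allowed range
```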

Art 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited – applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately.

The drafts of the DMA adopted by the European Council and by the European Parliament attempt to address that, but they only allow gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high and could push gatekeepers to offer lower security to avoid liability for adopting measures that would be judged by EU institutions and the courts as going beyond what is strictly necessary or indispensable.

The more recent DMA proposal from the European Parliament goes significantly beyond the original proposal, mandating full interoperability of a number of “independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning “high level of security and personal data protection” does not come close to addressing the seriousness of the risks created by the mandate. Instead of facing up to the risks and ensuring that the mandate itself be limited in ways that minimize them, the proposal seems just to expect that the gatekeepers can solve the problems if they only “nerd harder.”

All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:

restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.

The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.

Device Neutrality (Sideloading)

Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing installation of third-party software through alternative app stores other than the one provided by the manufacturer (e.g., Apple’s App Store for iOS devices). A similar express provision for sideloading is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require permitting sideloading.

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS). In doing so, a sideloading mandate will effectively force users onto whatever alternative app stores particular app developers prefer. App developers would have a strong incentive to set up their own app stores or to move their apps to the app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.

This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly. 

Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the European Council and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.

There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would prohibit outright a privacy and security-protection model that many users rationally choose today. Even with broader exemptions, this loss will be genuine. It is unclear whether taking away this choice from users is justified.

Conclusion

All the U.S. and EU legislative proposals considered here betray a policy preference of privileging uncertain and speculative competition gains at the expense of introducing a new and clear danger to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source).

Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors coming from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service paid by the legislative proposals to privacy and security is not much more than that. Policymakers should be expected to offer a much more detailed vision of concrete safeguards and mechanisms of enforcement when proposing rules that come with significant and entirely predictable privacy and security risks. Such vision is lacking on both sides of the Atlantic.

I do not want to suggest that interoperability is undesirable. My argument here is focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.

Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private ordering processes that seek out those forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to broadly open their services and data to all interested parties, even where it wouldn’t make sense for privacy, security, or other efficiency reasons. It is underinclusive in that the broad mandate will necessarily sap regulated firms’ resources and deter them from looking for new innovative uses that might make sense, but that are outside of the broad mandate. Thus, the likely result is less security and privacy, more expense, and less innovation.

Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see? 

But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.

It also is telling that supporters of interoperability tend to point to what are small-scale platforms (e.g., Mastodon) or protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems—e.g., that users will adequately monitor how various platforms use their data—they often assume unrealistic levels of user interest or technical acumen.

Interoperability in the DMA

The current draft of the DMA contains several provisions that construe interoperability broadly, though they apply only to “gatekeepers”—i.e., the largest online platforms:

  1. Mandated interoperability of “ancillary services” (Art 6(1)(f)); 
  2. Real-time data portability (Art 6(1)(h)); and
  3. Business-user access to their own and end-user data (Art 6(1)(i)). 

The first provision, Art 6(1)(f), is meant to force gatekeepers to allow, e.g., third-party payment or identification services—for example, to allow people to create social-media accounts without providing an email address, which is possible using services like “Sign in with Apple.” This kind of interoperability does not pose as big a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.

However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.
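To illustrate why the security of the identification service matters, here is a minimal stdlib sketch (not any real provider’s protocol) of signature-based identity assertions: the provider vouches for a user by signing an assertion, and the platform verifies the signature. If the provider’s signing key is weak or leaked, an attacker can mint valid-looking assertions for any user.

```python
# Illustrative sketch (hypothetical key and names, not a real protocol):
# an identity provider signs an assertion about a user; the relying
# platform verifies it. The whole scheme is only as strong as the
# provider's key management.
import hashlib
import hmac

PROVIDER_KEY = b"demo-secret"  # hypothetical; a leaked key breaks everything

def sign_assertion(user_id: str) -> str:
    return hmac.new(PROVIDER_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def verify_assertion(user_id: str, signature: str) -> bool:
    expected = sign_assertion(user_id)
    return hmac.compare_digest(expected, signature)

token = sign_assertion("user-42")
print(verify_assertion("user-42", token))   # True
print(verify_assertion("user-43", token))   # False: token does not transfer
```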

The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.
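The one-way/two-way distinction can be sketched as an access-scope check (all names hypothetical): read-only access lets a third party observe data, while only a full two-way mandate would also require accepting writes.

```python
# Illustrative sketch (hypothetical names): read-only access in the spirit
# of a one-way data-access mandate, versus the write access that only a
# full two-way interoperability mandate would require.

USER_DATA = {"alice": ["post-1", "post-2"]}

def read_stream(user: str, scope: str) -> list[str]:
    """One-way access: the caller may observe the user's own data."""
    if scope not in ("read", "read-write"):
        raise PermissionError("scope does not permit reading")
    return list(USER_DATA[user])

def write_post(user: str, scope: str, text: str) -> None:
    """Only a two-way mandate would require this; the DMA draft does not."""
    if scope != "read-write":
        raise PermissionError("one-way access: writing is not permitted")
    USER_DATA[user].append(text)

print(read_stream("alice", "read"))  # ['post-1', 'post-2']
```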

The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”

Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.

The third provision from Art 6(1)(i) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.

Why can’t stronger interoperability be safely mandated today?

Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so gatekeepers like Facebook end up with a legal duty to allow third parties to read data from and write data to Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API, and would lack the current safety valve of Facebook cutting off access because of the legal duty to deal created by the interoperability mandate. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:

1. Data sharing and mining via new APIs;

2. New opportunities for phishing and sock puppetry in a federated ecosystem; and

3. More friction for platforms trying to maintain a secure system.

Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”

There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.

So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.

What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.

The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.