
The Federal Trade Commission (FTC) might soon be charging rent to Meta Inc. The commission earlier this week issued (bear with me) an “Order to Show Cause why the Commission should not modify its Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (July 27, 2012), as modified by Order Modifying Prior Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (Apr. 27, 2020).”

It’s an odd one (I’ll get to that) and the third distinct Meta matter for the FTC in 2023.

Recall that the FTC and Meta faced off in federal court earlier this year, as the commission sought a preliminary injunction to block the company’s acquisition of virtual-reality studio Within Unlimited. As I wrote in a prior post, U.S. District Court Judge Edward J. Davila denied the FTC’s request in late January. Davila’s order was about more than just the injunction: it was predicated on the finding that the FTC was not likely to prevail in its antitrust case. That was not entirely surprising outside FTC HQ (perhaps not inside either), as I was but one in a long line of observers who had found the FTC’s case to be weak.

No matter for the not-yet-proposed FTC Bureau of Let’s-Sue-Meta, as there’s another FTC antitrust matter pending: the commission also seeks to unwind Facebook’s 2012 acquisition of Instagram and its 2014 acquisition of WhatsApp, even though the FTC reviewed both mergers at the time and allowed them to proceed. Apparently, antitrust apples are never too old for another bite. The FTC’s initial case seeking to unwind the earlier deals was dismissed, but its amended complaint has survived, and the case remains to be heard.

Back to the modification of the 2020 consent order, which famously set a record for privacy remedies: $5 billion, plus substantial behavioral remedies to run for 20 years (with the monetary penalty exceeding the EU’s highest by an order of magnitude). Then-Chair Joe Simons and then-Commissioners Noah Phillips and Christine Wilson accurately claimed that the settlement was “unprecedented, both in terms of the magnitude of the civil penalty and the scope of the conduct relief.” Two commissioners—Rebecca Slaughter and Rohit Chopra—dissented: they thought the unprecedented remedies inadequate.

I commend Chopra’s dissent, if only as an oddity. He rightly pointed out that the commissioners’ analysis of the penalty was “not empirically well grounded.” At no time did the commission produce an estimate of the magnitude of consumer harm, if any, underlying the record-breaking penalty. It never claimed to.

That’s odd enough. But then Chopra opined that “a rigorous analysis of unjust enrichment alone—which, notably, the Commission can seek without the assistance of the Attorney General—would likely yield a figure well above $5 billion.” That subjective likelihood also seemed to lack an empirical basis; certainly, Chopra provided none.

By all accounts, then, the remedies appeared to be wholly untethered from the magnitude of consumer harm wrought by the alleged violations. To be clear, I’m not disputing that Facebook violated the 2012 order, such that a 2019 complaint was warranted, even if I wonder now, as I wondered then, how a remedy that had nothing to do with the magnitude of harm could be an efficient one. 

Now, Commissioner Alvaro Bedoya has issued a statement correctly acknowledging that “[t]here are limits to the Commission’s order modification authority.” Specifically, the commission must “identify a nexus between the original order, the intervening violations, and the modified order.” Bedoya wrote that he has “concerns about whether such a nexus exists” for one of the proposed modifications. He still voted to go ahead with the proposal, as did Slaughter and Chair Lina Khan, who voiced no concerns at all.

It’s odder still. In its heavily redacted order, the commission appears to ground its proposal in conduct alleged to have occurred before the 2020 order that it now seeks to modify. There are no intervening violations there. For example:

From December 2017 to July 2019, Respondent also made misrepresentations relating to its Messenger Kids (“MK”) product, a free messaging and video calling application “specifically intended for users under the age of 13.”

. . . [Facebook] represented that MK users could communicate in MK with only parent-approved contacts. However, [Facebook] made coding errors that resulted in children participating in group text chats and group video calls with unapproved contacts under certain circumstances.

Perhaps, but what circumstances? According to Meta (and the FTC), Meta discovered, corrected, and reported the coding errors to the FTC in 2019. Of course, Meta is bound to comply with the 2020 Consent Order. But were they bound to do so in 2019? They’ve always been subject to the FTC’s “unfair and deceptive acts and practices” (UDAP) authority, but why allege 2019 violations now?

What harm is being remedied? On the one hand, there seems to have been an inaccurate statement about something parents might care about: a representation that users could communicate in Messenger Kids only with parent-approved contacts. On the other hand, there’s no allegation that such communications (with approved contacts of the approved contacts) led to any harm to the kids themselves.

Given all of that, why does the commission seek to impose substantial new requirements on Meta? For example, the commission now seeks restrictions on Meta:

…collecting, using, selling, licensing, transferring, sharing, disclosing, or otherwise benefitting from Covered Information collected from Youth Users for the purposes of developing, training, refining, improving, or otherwise benefitting Algorithms or models; serving targeted advertising, or enriching Respondent’s data on Youth users.

There’s more, but that’s enough to have “concerns about” the existence of a nexus between the since-remedied coding errors and the proposed “modification.” Or to put it another way, I wonder what one has to do with the other.

The only violation alleged to have occurred after the 2020 consent order was finalized has to do with the initial 2021 report of the assessor—an FTC-approved independent monitor of Facebook/Meta’s compliance—covering the period from October 25, 2020 to April 22, 2021. There, the assessor reported that:

 …the key foundational elements necessary for an effective [privacy] program are in place . . . [but] substantial additional work is required, and investments must be made, in order for the program to mature.

We don’t know what this amounts to. The initial assessment reported that the basic elements of the firm’s “comprehensive privacy program” were in place, but that substantial work remained. Did progress lag expectations? What were the failings? Were consumers harmed? Did Facebook/Meta fail to address deficiencies identified in the report? If so, for how long? We’re not told a thing. 

Again, what’s the nexus? And why the requirement that Meta “delete Covered Information collected from a User as a Youth unless [Meta] obtains Affirmative Express Consent from the User within a reasonable time period, not to exceed six (6) months after the User’s eighteenth birthday”? That’s a worry, not because there’s nothing there, but because substantial additional costs are being imposed without any account of their nexus to consumer harm, supposing there is one.

Some might prefer such an opt-in policy—one of two that would be required under the proposed modification—but it’s not part of the 2020 consent agreement and it’s not otherwise part of U.S. law. It does resemble a requirement under the EU’s General Data Protection Regulation. But the GDPR is not U.S. law and there are good reasons for that—see, for example, here, here, here, and here.

For one thing, a required opt-in for all such information, in all the ways that it may live on in the firm’s data and models, can be onerous for users and not just the firm. Will young adults be spared concrete harms because of the requirement? It’s highly likely that they’ll have less access to information (and to less information), but highly unlikely that the reduction will be confined to data to which they (and their parents) would not consent. What will be the net effect?

Requirements “[p]rior to … introducing any new or modified products, services, or features” raise a question about the level of granularity anticipated, given that limitations on the use of covered information apply to the training, refining, or improving of any algorithm or model, and that products, services, or features might be modified in various ways daily, or even in real time. Any such modification requires that the most recent independent assessment report find that all the many requirements of the mandated privacy program have been met. If not, then nothing new—including no modifications—is permitted until the assessor provides written confirmation that all material gaps and weaknesses have been “fully” remediated.

Is this supposed to entail independent oversight of every design decision involving information from youth users? Automated modifications? Or that everything come to a halt if any issues are reported? I gather that nobody—not even Meta—proposes to give the company carte blanche with youth information. But carte blanque?

As we’ve been discussing extensively at today’s International Center for Law & Economics event on congressional oversight of the commission, the FTC has a dual competition and consumer-protection enforcement mission. Efficient enforcement of the antitrust laws requires, among other things, that the costs of violations (including remedies) reflect the magnitude of consumer harm. That’s true for privacy, too. There’s no route to coherent—much less complementary—FTC-enforcement programs if consumer protection imposes costs that are wholly untethered from the harms it is supposed to address. 

Four prominent horsemen of the Biden administration’s bureaucratic apocalypse—the Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Civil Rights Division, the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC)—came together April 25 to issue a joint statement pledging vigorous enforcement against illegal activity perpetrated through the use of artificial intelligence (AI) and automated systems.

AI is, of course, very much in the news these days. And when AI is used to violate the law, it obviously is fully subject to enforcement scrutiny. But why make a big splash announcement merely to state a truism?

One suspects there is more to the story. The language of the joint statement, together with the FTC’s accompanying press release, provides some hints. Those hints point to a campaign by the administration to effectuate de facto bureaucratic regulation of AI through overly expansive interpretations of existing law. The following discussion will focus on the FTC’s role in this initiative.

Discussion

The FTC’s brief press release embodies a broad view of AI-related wrongdoing. It notes that the four agencies “pledged today to uphold America’s commitment to the core principles of fairness, equality, and justice” as emerging automated systems, including AI, “become increasingly common in our daily lives – impacting civil rights, fair competition, consumer protection, and equal opportunity.” The release adds that the agencies have “resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.”

The FTC’s references to “fairness” and “fair competition” by implication allude to the fatally flawed November 2022 FTC Policy Statement on Unfair Methods of Competition (UMC). That policy statement has been roundly criticized (see the thoughtful critiques in the Truth on the Market symposium on the UMC statement) for rejecting the venerable consumer-welfare standard that had long guided FTC competition-enforcement policy, and replacing it with subjective notions of “unfair” conduct that could arbitrarily be invoked by the Commission to attack any conduct it found distasteful. (See then-Commissioner Christine Wilson’s dissenting statement.) Such an approach undermines the rule of law, ignores efficiencies, promotes uncertainty, and thereby harmfully interferes with welfare-promoting business conduct.

The specter of arbitrary FTC challenges to AI-related competitive practices that are misunderstood by the commission is obvious. Arbitrary legal attacks on AI practices on dubious subjective grounds could forestall a substantial amount of welfare-generating innovation in the AI space. This would reduce economic wealth creation and harm American technological progress in AI, in addition to weakening the U.S. efforts to prevent China from becoming dominant in this key realm (see here for a discussion of the U.S.-China AI rivalry).

The statement’s announcement that the agencies intend “to monitor the development and use of automated systems” is likewise troublesome. In the FTC’s case, it suggests a potential interest in deciding what forms of AI “development and use” are appropriate. Although rulemaking is not mentioned, the threat of litigation being brought by one or more of the agencies against certain disfavored AI implementations is real.

In particular, the threat of FTC UMC investigations and prosecutions could shape the nature of AI research by directing it away from innovations that the commission dislikes. This would be a form of “regulation by enforcement oversight” that could substantially slow progress in AI and thereby reduce economic growth.

The joint statement reinforces this problematic reading of the FTC’s press release. It stresses the FTC’s finding that:

AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.

The FTC, however, lacks a general statutory authority to combat “discrimination,” and its authority to attack forms of commercial surveillance likewise is highly dubious. The FTC’s proposed commercial surveillance and data security rulemaking, for example, flunks cost-benefit analysis and has other flaws that would prevent it from passing legal muster; see more here.

The notion that the FTC may challenge AI innovations it disfavors by bringing new questionable “discrimination” suits, and by concocting legally indefensible rule-based surveillance and data-security obligations, is a source of serious concern. As in the case of the UMC policy statement, the FTC would be taking novel actions beyond the scope of its congressionally granted authorities. Even if the courts eventually rejected such FTC initiatives, the costs reflected in foregone welfare-enhancing improvements in AI capabilities would be considerable.

The joint statement’s discussion of the CFPB, EEOC, and the DOJ Civil Rights Division less obviously supports the proposition that those agencies will be encouraged to act beyond their statutory mandates. It is notable, however, that various commentators have raised concerns about regulatory overreach by these three entities; with regard to the CFPB, for example, see here, here, and here.

Nevertheless, it is concerning that the administration would assign high-priority oversight of AI—an area of enormous technological and economic potential—to agencies that are concerned primarily with civil-rights issues and with consumer protection in the realm of financial services. The potential for regulatory mission creep that would harm American AI development and the dynamic competition it sparks is obvious.

Conclusion

The joint statement on AI and automated systems should be seen as a yellow (if not a red) warning flag that Biden administration efforts to micromanage AI development may be in the works. Particular attention should focus on the FTC, which has the potential to seriously undermine beneficial AI development through ill-conceived litigation and regulatory initiatives.

This is a serious matter. AI is of major consequence in the global political economy, particularly given China’s interest in the field. One can only hope that the FTC and the Biden administration will keep this sober reality in mind before they gin up new misguided forms of regulatory interference in the evolution of AI.

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on private firms’ use of data and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other bases on which it can argue it relies in order to make use of user data, but a larger issue is at play: the decision finds both that making use of user data for personalized advertising is not “necessary” to the contract between a service and its users and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing; it only rules out reliance on “contractual necessity” for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising while not depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical limitations as well as economic limitations. What is technically possible to offer can also be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.  

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of a contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to reach a conclusion as to whether those services are economically viable without the use of personalized advertising.

However, there is a key institutional point to be made here. Privacy regulators are likely to be eminently unprepared to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

Under a draft “adequacy” decision unveiled today by the European Commission, data-privacy and security commitments made by the United States in an October executive order signed by President Joe Biden were found to comport with the EU’s General Data Protection Regulation (GDPR). If adopted, the decision would provide a legal basis for flows of personal data between the EU and the United States.

This is a welcome development, as some national data-protection authorities in the EU have begun to issue serious threats to stop U.S.-owned data-related service providers from offering services to Europeans. Pending more detailed analysis, I offer some preliminary thoughts here.

Decision Responds to the New U.S. Data-Privacy Framework

The Commission’s decision follows the changes to U.S. policy introduced by Biden’s Oct. 7 executive order. In its July 2020 Schrems II judgment, the EU Court of Justice (CJEU) invalidated the prior adequacy decision on grounds that EU citizens lacked sufficient redress under U.S. law and that U.S. law was not equivalent to “the minimum safeguards” of personal data protection under EU law. The new executive order introduced redress mechanisms that include creating a civil-liberties-protection officer in the Office of the Director of National Intelligence (DNI), as well as a new Data Protection Review Court (DPRC). The DPRC is proposed as an independent review body that will make decisions that are binding on U.S. intelligence agencies.

The old framework had sparked concerns about the independence of the DNI’s ombudsperson and about what were seen as insufficient safeguards against the external pressures that official could face, including the threat of removal. Under the new framework, the independence and binding powers of the DPRC are grounded in regulations issued by the U.S. Attorney General.

To address concerns about the necessity and proportionality of U.S. signals-intelligence activities, the executive order also defines the “legitimate objectives” in pursuit of which such activities can be conducted. These activities would, according to the order, be conducted with the goal of “achieving a proper balance between the importance of the validated intelligence priority being advanced and the impact on the privacy and civil liberties of all persons, regardless of their nationality or wherever they might reside.”

Will the Draft Decision Satisfy the CJEU?

With this draft decision, the European Commission announced it has favorably assessed the executive order’s changes to the U.S. data-protection framework, which apply to foreigners from friendly jurisdictions (presumed to include the EU). If the Commission formally adopts an adequacy decision, however, the decision is certain to be challenged before the CJEU by privacy advocates. In my preliminary analysis after Biden signed the executive order, I summarized some of the concerns raised regarding two aspects relevant to the finding of adequacy: proportionality of data collection and availability of effective redress.

Opponents of granting an adequacy decision tend to rely on an assumption that a finding of adequacy requires virtually identical substantive and procedural privacy safeguards as required within the EU. As noted by the European Commission in the draft decision, this position is not well-supported by CJEU case law, which clearly recognizes that only “adequate level” and “essential equivalence” of protection are required from third-party countries under the GDPR.

To date, the CJEU has not had to specify in greater detail precisely what, in its view, these standards mean. Instead, the Court has been able simply to point to certain features of U.S. law and practice that were significantly below the GDPR standard (e.g., that the official responsible for providing individual redress was not guaranteed to be independent from political pressure). Future legal challenges to a new Commission adequacy decision will most likely require the CJEU to provide more guidance on what “adequate” and “essentially equivalent” mean.

In the draft decision, the Commission carefully considered the features of U.S. law and practice that the Court previously found inadequate under the GDPR. Nearly half of the explanatory part of the decision is devoted to “access and use of personal data transferred from the [EU] by public authorities in the” United States, with the analysis grounded in CJEU’s Schrems II decision. The Commission concludes that, collectively, all U.S. redress mechanisms available to EU persons:

…allow individuals to have access to their personal data, to have the lawfulness of government access to their data reviewed and, if a violation is found, to have such violation remedied, including through the rectification or erasure of their personal data.

The Commission accepts that individuals have access to their personal data processed by U.S. public authorities, but clarifies that this access may be legitimately limited—e.g., by national-security considerations. Unlike some of the critics of the new executive order, the Commission does not take the simplistic view that access to personal data must be guaranteed by the same procedure that provides binding redress, including the Data Protection Review Court. Instead, the Commission accepts that other avenues, like requests under the Freedom of Information Act, may perform that function.

Overall, the Commission presents a sophisticated, yet uncynical, picture of U.S. law and practice. The lack of cynicism, e.g., about the independence of the DPRC adjudicative process, will undoubtedly be seen by some as naïve and unrealistic, even if the “realism” in this case is based on speculation about what might happen (e.g., secret changes to U.S. policy), rather than on evidence. Given the changes adopted by the U.S. government, the key question for the CJEU will be whether to follow the Commission’s approach or that of the activists.

What Happens Next?

The draft adequacy decision will now be scrutinized by EU and national officials. It remains to be seen what the European Data Protection Board and the representatives of EU national governments will collectively recommend, but there are signs that some domestic data-protection authorities recognize that a finding of adequacy may be appropriate (see, e.g., the opinion from the Hamburg authority).

It is also likely that a significant portion of the European Parliament will be highly critical of the decision, even to the extent of recommending not to adopt it. Importantly, however, none of the consulted bodies have formal power to bind the European Commission on this question. The whole process is expected to take at least several months.

European Union officials insist that the executive order President Joe Biden signed Oct. 7 to implement a new U.S.-EU data-privacy framework must address European concerns about U.S. agencies’ surveillance practices. Awaited since March, when U.S. and EU officials reached an agreement in principle on a new framework, the order is intended to replace an earlier data-privacy framework that was invalidated in 2020 by the Court of Justice of the European Union (CJEU) in its Schrems II judgment.

This post is the first in what will be a series of entries examining whether the new framework satisfies the requirements of EU law or, as some critics argue, falls short of them. The critics include Max Schrems’ organization NOYB (for “none of your business”), which has announced that it “will likely bring another challenge before the CJEU” if the European Commission officially decides that the new U.S. framework is “adequate.” In this introduction, I will highlight the areas of contention based on NOYB’s “first reaction.”

The overarching legal question that the European Commission (and likely also the CJEU) will need to answer, as spelled out in the Schrems II judgment, is whether the United States “ensures an adequate level of protection for personal data essentially equivalent to that guaranteed in the European Union by the GDPR, read in the light of Articles 7 and 8 of the [EU Charter of Fundamental Rights].” Importantly, as Theodore Christakis, Kenneth Propp, and Peter Swire point out, “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, either substantively or procedurally. The precise degree of flexibility remains an open question, however, and one that the EU Court may need to clarify to a much greater extent.

Proportionality and Bulk Data Collection

Under Article 52(1) of the EU Charter of Fundamental Rights, restrictions of the right to privacy must meet several conditions. They must be “provided for by law” and “respect the essence” of the right. Moreover, “subject to the principle of proportionality, limitations may be made only if they are necessary” and meet one of the objectives recognized by EU law or “the need to protect the rights and freedoms of others.”

As NOYB has acknowledged, the new executive order supplemented the phrasing “as tailored as possible” present in 2014’s Presidential Policy Directive on Signals Intelligence Activities (PPD-28) with language explicitly drawn from EU law: mentions of the “necessity” and “proportionality” of signals-intelligence activities related to “validated intelligence priorities.” But NOYB counters:

However, despite changing these words, there is no indication that US mass surveillance will change in practice. So-called “bulk surveillance” will continue under the new Executive Order (see Section 2 (c)(ii)) and any data sent to US providers will still end up in programs like PRISM or Upstream, despite of the CJEU declaring US surveillance laws and practices as not “proportionate” (under the European understanding of the word) twice.

It is true that the Schrems II Court held that U.S. law and practices do not “[correlate] to the minimum safeguards resulting, under EU law, from the principle of proportionality.” But it is crucial to note the specific reasons the Court gave for that conclusion. Contrary to what NOYB suggests, the Court did not simply state that bulk collection of data is inherently disproportionate. Instead, the reasons it gave were that “PPD-28 does not grant data subjects actionable rights before the courts against the US authorities” and that, under Executive Order 12333, “access to data in transit to the United States [is possible] without that access being subject to any judicial review.”

CJEU case law does not support the idea that bulk collection of data is inherently disproportionate under EU law; bulk collection may be proportionate, taking into account the procedural safeguards and the magnitude of interests protected in a given case. (For another discussion of safeguards, see the CJEU’s decision in La Quadrature du Net.) Further complicating the legal analysis here is that, as mentioned, it is far from obvious that EU law requires foreign countries offer the same procedural or substantive safeguards that are applicable within the EU.

Effective Redress

The Court’s Schrems II conclusion therefore primarily concerns the effective redress available to EU citizens against potential restrictions of their right to privacy from U.S. intelligence activities. The new two-step system proposed by the Biden executive order includes creation of a Data Protection Review Court (DPRC), which would be an independent review body with power to make binding decisions on U.S. intelligence agencies. In a comment pre-dating the executive order, Max Schrems argued that:

It is hard to see how this new body would fulfill the formal requirements of a court or tribunal under Article 47 CFR, especially when compared to ongoing cases and standards applied within the EU (for example in Poland and Hungary).

This comment raises two distinct issues. First, Schrems seems to suggest that an adequacy decision can only be granted if the available redress mechanism satisfies the requirements of Article 47 of the Charter. But this is a hasty conclusion. The CJEU’s phrasing in Schrems II is more cautious:

…Article 47 of the Charter, which also contributes to the required level of protection in the European Union, compliance with which must be determined by the Commission before it adopts an adequacy decision pursuant to Article 45(1) of the GDPR

In arguing that Article 47 “also contributes to the required level of protection,” the Court is not saying that it determines the required level of protection. This is potentially significant, given that the standard of adequacy is “essential equivalence,” not that it be procedurally and substantively identical. Moreover, the Court did not say that the Commission must determine compliance with Article 47 itself, but with the “required level of protection” (which, again, must be “essentially equivalent”).

Second, there is the related but distinct question of whether the redress mechanism is effective under the applicable standard of “required level of protection.” Christakis, Propp, and Swire offered a helpful analysis suggesting that it is, considering the proposed DPRC’s independence, effective investigative powers,  and authority to issue binding determinations. I will offer a more detailed analysis of this point in future posts.

Finally, NOYB raised a concern that “judgment by ‘Court’ [is] already spelled out in Executive Order.” This concern seems to be based on the view that a decision of the DPRC (“the judgment”) and what the DPRC communicates to the complainant are the same thing. Or, in other words, that the legal effects of a DPRC decision are exhausted by providing the individual with the neither-confirm-nor-deny statement set out in Section 3 of the executive order. This is clearly incorrect: the DPRC has power to issue binding directions to intelligence agencies. The actual binding determinations of the DPRC are not predetermined by the executive order; only the information to be provided to the complainant is.

What may call for closer consideration are issues of access to information and data. For example, in La Quadrature du Net, the CJEU looked at the difficult problem of notification of persons whose data has been subject to state surveillance, requiring individual notification “only to the extent that and as soon as it is no longer liable to jeopardise” the law-enforcement tasks in question. Given the “essential equivalence” standard applicable to third-country adequacy assessments, however, it does not automatically follow that individual notification is required in that context.

Moreover, it also does not necessarily follow that adequacy requires that EU citizens have a right to access the data processed by foreign government agencies. The fact that there are significant restrictions on rights to information and to access in some EU member states, though not definitive (after all, those countries may be violating EU law), may be instructive for the purposes of assessing the adequacy of data protection in a third country, where EU law requires only “essential equivalence.”

Conclusion

There are difficult questions of EU law that the European Commission will need to address in the process of deciding whether to issue a new adequacy decision for the United States. It is also clear that an affirmative decision from the Commission will be challenged before the CJEU, although the arguments for such a challenge are not yet well-developed. In future posts I will provide more detailed analysis of the pivotal legal questions. My focus will be to engage with the forthcoming legal analyses from Schrems and NOYB and from other careful observers.

With just a week to go until the U.S. midterm elections, which potentially herald a change in control of one or both houses of Congress, speculation is mounting that congressional Democrats may seek to use the lame-duck session following the election to move one or more pieces of legislation targeting the so-called “Big Tech” companies.

Gaining particular notice—on grounds that it is the least controversial of the measures—is S. 2710, the Open App Markets Act (OAMA). Introduced by Sen. Richard Blumenthal (D-Conn.), the Senate bill has garnered 14 cosponsors: exactly seven Republicans and seven Democrats. It would, among other things, force certain mobile app stores and operating systems to allow “sideloading” and open their platforms to rival in-app payment systems.

Unfortunately, even this relatively restrained legislation—at least, when compared to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act or the European Union’s Digital Markets Act (DMA)—is highly problematic in its own right. Here, I will offer seven major questions the legislation leaves unresolved.

1.     Are Quantitative Thresholds a Good Indicator of ‘Gatekeeper Power’?

It is no secret that OAMA has been tailor-made to regulate two specific app stores: Android’s Google Play Store and Apple’s App Store (see here, here, and, yes, even Wikipedia knows it). The text makes this clear by limiting the bill’s scope to app stores with more than 50 million users, a threshold that only Google Play and the Apple App Store currently satisfy.

However, purely quantitative thresholds are a poor indicator of a company’s potential “gatekeeper power.” An app store might have far fewer than 50 million users but cater to a relevant niche market. By the bill’s own logic, why shouldn’t that app store likewise be compelled to be open to competing app distributors? Conversely, it may be easy for users of very large app stores to multi-home or switch seamlessly to competing stores. In either case, raw user numbers paint a distorted picture of the market’s realities.

As it stands, the bill’s thresholds appear arbitrary and pre-committed to “disciplining” just two companies: Google and Apple. In principle, good laws should be abstract and general and not intentionally crafted to apply only to a few select actors. In OAMA’s case, the law’s specific thresholds are also factually misguided, as purely quantitative criteria are not a good proxy for the sort of market power the bill purportedly seeks to curtail.

2.     Why Does the Bill Not Apply to All App Stores?

Rather than applying to app stores across the board, OAMA targets only those associated with mobile devices and “general purpose computing devices.” It’s not clear why.

For example, why doesn’t it cover app stores on gaming platforms, such as Microsoft’s Xbox or Sony’s PlayStation?


Currently, a PlayStation user can only buy digital games through the PlayStation Store, where Sony reportedly takes a 30% cut of all sales—although its pricing schedule is less transparent than that of mobile rivals such as Apple or Google.

Clearly, this bothers some developers. Much like Epic Games CEO Tim Sweeney’s ongoing crusade against the Apple App Store, indie-game publisher Iain Garner of Neon Doctrine recently took to Twitter to complain about Sony’s restrictive practices. According to Garner, “Platform X” (clearly PlayStation) charges developers up to $25,000 and 30% of subsequent earnings to give games a modicum of visibility on the platform, in addition to requiring them to jump through such hoops as making a PlayStation-specific trailer and writing a blog post. Garner further alleges that Sony severely circumscribes developers’ ability to offer discounts, “meaning that Platform X owners will always get the worst deal!” (see also here).

Microsoft’s Xbox Game Store similarly takes a 30% cut of sales. Presumably, Microsoft and Sony both have the same type of gatekeeper power in the gaming-console market that Apple and Google are said to have on their respective platforms, leading to precisely those issues that OAMA ostensibly seeks to combat. Namely, that consumers are not allowed to choose alternative app stores through which to buy games on their respective consoles, and developers must acquiesce to Sony’s and Microsoft’s terms if they want their games to reach those players.

More broadly, dozens of online platforms also charge commissions on the sales made by their creators. To cite but a few: OnlyFans takes a 20% cut of sales; Facebook gets 30% of the revenue that creators earn from their followers; YouTube takes 45% of ad revenue generated by users; and Twitch reportedly rakes in 50% of subscription fees.

This is not to say that all these services are monopolies that should be regulated. To the contrary, it seems like fees in the 20-30% range are common even in highly competitive environments. Rather, it is merely to observe that there are dozens of online platforms that demand a percentage of the revenue that creators generate and that prevent those creators from bypassing the platform. As well they should, after all, because creating and improving a platform is not free.

It is nonetheless difficult to see why legislation regulating online marketplaces should focus solely on two mobile app stores. Ultimately, the inability of OAMA’s sponsors to properly account for this carveout diminishes the law’s credibility.

3.     Should Picking Among Legitimate Business Models Be up to Lawmakers or Consumers?

“Open” and “closed” platforms posit two different business models, each with its own advantages and disadvantages. Some consumers may prefer more open platforms because they grant them more flexibility to customize their mobile devices and operating systems. But there are also compelling reasons to prefer closed systems. As Sam Bowman observed, narrowing choice through a more curated system frees users from having to research every possible option every time they buy or use some product. Instead, they can defer to the platform’s expertise in determining whether an app or app store is trustworthy or whether it contains, say, objectionable content.

Currently, users can choose to opt for Apple’s semi-closed “walled garden” iOS or Google’s relatively more open Android OS (which OAMA wants to pry open even further). Ironically, under the pretext of giving users more “choice,” OAMA would take away the possibility of choice where it matters the most—i.e., at the platform level. As Mikolaj Barczentewicz has written:

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS).

This erases the nuances between the two and pushes Android and iOS to converge around a single model. But if consumers unequivocally preferred open platforms, Apple would have no customers, because everyone would already be on Android.

Contrary to regulators’ simplistic assumptions, “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play between these two perfectly valid business models that belie simplistic characterizations of one being inherently superior to the other.

It is debatable whether courts, regulators, or legislators are well-situated to resolve these complex tradeoffs by substituting businesses’ product-design decisions and consumers’ revealed preferences with their own. After all, if regulators had such perfect information, we wouldn’t need markets or competition in the first place.

4.     Does OAMA Account for the Security Risks of Sideloading?

Platforms retaining some control over the apps or app stores allowed on their operating systems bolsters security, as it allows companies to weed out bad players.

Both Apple and Google do this, albeit to varying degrees. For instance, Android already allows sideloading and third-party in-app payment systems to some extent, while Apple runs a tighter ship. However, studies have shown that it is precisely the iOS “walled garden” model which gives it an edge over Android in terms of privacy and security. Even vocal Apple critic Tim Sweeney recently acknowledged that increased safety and privacy were competitive advantages for Apple.

The problem is that far-reaching sideloading mandates—such as the ones contemplated under OAMA—are fundamentally at odds with current privacy and security capabilities (see here and here).

OAMA’s defenders might argue that the law does allow covered platforms to raise safety and security defenses, thus making the tradeoffs between openness and security unnecessary. But the bill places such stringent conditions on those defenses that platform operators will almost certainly be deterred from risking running afoul of the law’s terms. To invoke the safety and security defenses, covered companies must demonstrate that provisions are applied on a “demonstrably consistent basis”; are “narrowly tailored and could not be achieved through less discriminatory means”; and are not used as a “pretext to exclude or impose unnecessary or discriminatory terms.”

Implementing these stringent requirements will drag enforcers into a micromanagement quagmire. There are thousands of potential spyware, malware, rootkit, backdoor, and phishing (to name just a few) software-security issues—all of which pose distinct threats to an operating system. The Federal Trade Commission (FTC) and the federal courts will almost certainly struggle to police the “consistency” requirement across such varied threats.

Likewise, OAMA’s reference to “least discriminatory means” suggests there is only one valid answer to any given security-access tradeoff. Further, depending on one’s preferred balance between security and “openness,” a claimed security risk may or may not be “pretextual,” and thus may or may not be legal.

Finally, the bill text appears to preclude the possibility of denying access to a third-party app or app store for reasons other than safety and privacy. This would undermine Apple’s and Google’s two-tiered quality-control systems, which also control for “objectionable” content such as (child) pornography and social engineering. 

5.     How Will OAMA Safeguard the Rights of Covered Platforms?

OAMA is also deeply flawed from a procedural standpoint. Most importantly, there is no meaningful way to contest a designation as a “covered company” under the law, or the harms presumed to flow from that designation.

Once a company is “covered,” it is presumed to hold gatekeeper power, with all the associated risks for competition, innovation, and consumer choice. Remarkably, this presumption does not admit any qualitative or quantitative evidence to the contrary. The only thing a covered company can do to rebut the designation is to demonstrate that it, in fact, has fewer than 50 million users.

By preventing companies from showing that they do not hold the kind of gatekeeper power that harms competition, decreases innovation, raises prices, and reduces choice (the bill’s stated objectives), OAMA severely tilts the playing field in the FTC’s favor. Even the EU’s enforcer-friendly DMA incorporated a last-minute amendment allowing firms to dispute their status as “gatekeepers.” While this defense is not perfect (companies cannot rely on the same qualitative evidence that the European Commission can use against them), at least gatekeeper status can be contested under the DMA.

6.     Should Legislation Protect Competitors at the Expense of Consumers?

Like most of the new wave of regulatory initiatives against Big Tech (but unlike antitrust law), OAMA is explicitly designed to help competitors, with consumers footing the bill.

For example, OAMA prohibits covered companies from using or combining nonpublic data obtained from third-party apps or app stores operating on their platforms in competition with those third parties. While this may have the short-term effect of redistributing rents away from these platforms and toward competitors, it risks harming consumers and third-party developers in the long run.

Platforms’ ability to integrate such data is part of what allows them to bring better and improved products and services to consumers in the first place. OAMA tacitly admits this by recognizing that the use of nonpublic data grants covered companies a competitive advantage. In other words, it allows them to deliver a product that is better than competitors’.

Prohibiting self-preferencing raises similar concerns. Why wouldn’t a company that has invested billions in developing a successful platform and ecosystem give preference to its own products to recoup some of that investment? After all, the possibility of exercising some control over downstream and adjacent products is what might have driven the platform’s development in the first place. In other words, self-preferencing may be a symptom of competition, and not the absence thereof. Third-party companies also would have weaker incentives to develop their own platforms if they can free-ride on the investments of others. And platforms that favor their own downstream products might simply be better positioned to guarantee their quality and reliability (see here and here).

In all of these cases, OAMA’s myopic focus on improving the lot of competitors for easy political points will upend the mobile ecosystems from which both users and developers derive significant benefit.

7.     Shouldn’t the EU Bear the Risks of Bad Tech Regulation?

Finally, U.S. lawmakers should ask themselves whether the European Union, which has no tech leaders of its own, is really a model to emulate. Today, after all, marks the day the long-awaited Digital Markets Act—the EU’s response to perceived contestability and fairness problems in the digital economy—officially takes effect. In anticipation of the law entering into force, I summarized some of the outstanding issues that will define implementation moving forward in this recent tweet thread.

We have been critical of the DMA here at Truth on the Market on several factual, legal, economic, and procedural grounds. The law’s problems range from it essentially being a tool to redistribute rents away from platforms and to third parties, despite it being unclear why the latter group is inherently more deserving (Pablo Ibañez Colomo has raised a similar point); to its opacity and lack of clarity, a process that appears tilted in the Commission’s favor; to the awkward way it interacts with EU competition law, ignoring the welfare tradeoffs between the models it seeks to impose and perfectly valid alternatives (see here and here); to its flawed assumptions (see, e.g., here on contestability under the DMA); to the dubious legal and economic value of the theory of harm known as “self-preferencing”; to the very real possibility of unintended consequences (e.g., in relation to security and interoperability mandates).

In other words, that the United States lags the EU in seeking to regulate this area might not be a bad thing, after all. Despite the EU’s insistence on being a trailblazing agenda-setter at all costs, the wiser thing in tech regulation might be to remain at a safe distance. This is particularly true when one considers the potentially large costs of legislative missteps and the difficulty of recalibrating once a course has been set.

U.S. lawmakers should take advantage of this dynamic and learn from some of the Old Continent’s mistakes. If they play their cards right and take the time to read the writing on the wall, they might just succeed in averting antitrust’s uncertain future.

Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.

Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.

The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission and some good solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse, but more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.

More Good News from the Commission

On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action as anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients, who might face tighter budget constraints than the median physician.

Peering over the rims of the rose-colored glasses, my gaze turns to Meta. In July, the FTC sued to block Meta’s proposed acquisition of Within Unlimited (and its virtual-reality exercise app, Supernatural). Gus wrote about it with wonder, noting reports that the staff had recommended against filing, only to be overruled by the chair.

Now comes October and an amended complaint. The amended complaint is even weaker than the opening salvo. Now, the FTC alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”

So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?

Potential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. More speculation: the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.

When It Rains It Pours Regulations

On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, they announced an Oct. 20 open meeting with three more ANPRs on the agenda.

First, on the advance notice: I’m not sure what they mean by “commercial surveillance.” The term doesn’t appear in statutory law, or in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort. They don’t say or even sketch a possible rule. That’s a problem for the FTC, because the law demands that the Commission state its regulatory objectives, along with regulatory alternatives under consideration, in the ANPR itself. If they get to an NPRM, they are required to describe a proposed rule with specificity.

What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:

When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce….I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate….It’s a naked power grab.

Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: the questions do not violate the Constitution. But she’s probably half wrong too.

For more, see ICLE’s Oct. 20 panel discussion and the executive summary to our forthcoming comments to the Commission.

But wait, there’s more! There were three additional ANPRs on the Commission’s Oct. 20 agenda. So that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Are more regulations an odd way to ask Congress for money? Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.

Laws, Like People, Have Their Limits

Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:

The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.

Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress, by the agencies actually charged with enforcing them. And for equal protection. And three more, at least, for a little regulatory humility, if we find it.

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of Georgia Institute of Technology, in that it would prohibit both transfers of data to other countries and the involvement of foreign capital in processing even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected that it will become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because they saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, also Theodore Christakis), the evidence for cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to the division of national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were only hosted there. (Notably, OVH is among the most vocal European proponents of hard data localization).

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts, or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to treat seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows in the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland and employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding amount of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but they will serve to wall the region off from the kind of investment that can make it thrive.

[This post is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Aug. 22 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPRM) is breathtaking in its scope. For an overview summary, see this Aug. 11 FTC press release.

In their dissenting statements opposing ANPRM’s release, Commissioners Noah Phillips and Christine Wilson expertly lay bare the notice’s serious deficiencies. Phillips’ dissent stresses that the ANPRM illegitimately arrogates to the FTC legislative power that properly belongs to Congress:

[The [A]NPRM] recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive. At the same time, the ANPR virtually ignores the privacy and security concerns that have animated our [FTC] enforcement regime for decades. … [As such, the ANPRM] is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate. That’s not “democratizing” the FTC or using all “the tools in the FTC’s toolbox.” It’s a naked power grab.

Wilson’s complementary dissent critically notes that the 2021 changes to FTC rules of practice governing consumer-protection rulemaking decrease opportunities for public input and vest significant authority solely with the FTC chair. She also echoes Phillips’ overarching concern with FTC overreach (footnote citations omitted):

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. … [R]egulatory and enforcement overreach increasingly has drawn sharp criticism from courts. Recent Supreme Court decisions indicate FTC rulemaking overreach likely will not fare well when subjected to judicial review.

Phillips and Wilson’s warnings are fully warranted. The ANPRM contemplates a possible Magnuson-Moss rulemaking pursuant to Section 18 of the FTC Act,[1] which authorizes the commission to promulgate rules dealing with “unfair or deceptive acts or practices.” The questions that the ANPRM highlights center primarily on concerns of unfairness.[2] Any unfairness-related rulemaking provisions eventually adopted by the commission will have to satisfy a strict statutory cost-benefit test that defines “unfair” acts, found in Section 5(n) of the FTC Act. As explained below, the FTC will be hard-pressed to justify addressing most of the ANPRM’s concerns in Section 5(n) cost-benefit terms.

Discussion

The requirements imposed by Section 5(n) cost-benefit analysis

Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority … to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In other words, a practice may be condemned as unfair only if it causes or is likely to cause “(1) substantial injury to consumers (2) which is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”

This is a demanding standard. (For scholarly analyses of the standard’s legal and economic implications authored by former top FTC officials, see here, here, and here.)

First, the FTC must demonstrate that a practice imposes a great deal of harm on consumers, which they could not readily have avoided. This requires detailed analysis of the actual effects of a particular practice, not mere theoretical musings about possible harms that may (or may not) flow from such practice. Actual effects analysis, of course, must be based on empiricism: consideration of hard facts.

Second, assuming that this formidable hurdle is overcome, the FTC must then acknowledge and weigh countervailing welfare benefits that might flow from such a practice. In addition to direct consumer-welfare benefits, other benefits include “benefits to competition.” Those may include business efficiencies that reduce a firm’s costs, because such efficiencies are a driver of vigorous competition and, thus, of long-term consumer welfare. As the Organisation for Economic Co-operation and Development has explained (see OECD Background Note on Efficiencies, 2012, at 14), dynamic and transactional business efficiencies are particularly important in driving welfare enhancement.

In sum, under Section 5(n), the FTC must show actual, fact-based, substantial harm to consumers that they could not have escaped, acting reasonably. The commission must also demonstrate that such harm is not outweighed by consumer and (procompetitive) business-efficiency benefits. What’s more, Section 5(n) makes clear that the FTC cannot “pull a rabbit out of a hat” and interject other “public policy” considerations as key factors in the rulemaking  calculus (“[s]uch [other] public policy considerations may not serve as a primary basis for … [a] determination [of unfairness]”).

It ineluctably follows as a matter of law that a Section 18 FTC rulemaking sounding in unfairness must be based on hard empirical cost-benefit assessments, which require data grubbing and detailed evidence-based economic analysis. Mere anecdotal stories of theoretical harm to some consumers that is alleged to have resulted from a practice in certain instances will not suffice.

As such, if an unfairness-based FTC rulemaking fails to adhere to the cost-benefit framework of Section 5(n), it inevitably will be struck down by the courts as beyond the FTC’s statutory authority. This conclusion is buttressed by the tenor of the Supreme Court’s unanimous 2021 opinion in AMG Capital v. FTC, which rejected the FTC’s claim that its statutory injunctive authority included the ability to obtain monetary relief for harmed consumers (see my discussion of this case here).

The ANPRM and Section 5(n)

Regrettably, the tone of the questions posed in the ANPRM indicates a lack of consideration for the constraints imposed by Section 5(n). Accordingly, any future rulemaking that sought to establish “remedies” for many of the theorized abuses found in the ANPRM would stand very little chance of being upheld in litigation.

The Aug. 11 FTC press release cited previously addresses several broad topical sources of harms: harms to consumers; harms to children; regulations; automated systems; discrimination; consumer consent; notice, transparency, and disclosure; remedies; and obsolescence. These categories are chock-full of questions that imply the FTC may consider restrictions on business conduct that go far beyond the scope of the commission’s authority under Section 5(n). (The questions are notably silent about the potential consumer benefits and procompetitive efficiencies that may arise from the business practices here called into question.)

A few of the many questions set forth under just four of these topical listings (harms to consumers, harms to children, regulations, and discrimination) are highlighted below, to provide a flavor of the statutory overreach that characterizes all aspects of the ANPRM. Many other examples could be cited. (Phillips’ dissenting statement provides a cogent and critical evaluation of ANPRM questions that embody such overreach.) Furthermore, although there is a short discussion of “costs and benefits” in the ANPRM press release, it is wholly inadequate to the task.

Under the category “harms to consumers,” the ANPRM press release focuses on harm from “lax data security or surveillance practices.” It asks whether FTC enforcement has “adequately addressed indirect pecuniary harms, including potential physical harms, psychological harms, reputational injuries, and unwanted intrusions.” The press release suggests that a rule might consider addressing harms to “different kinds of consumers (e.g., young people, workers, franchisees, small businesses, women, victims of stalking or domestic violence, racial minorities, the elderly) in different sectors (e.g., health, finance, employment) or in different segments or ‘stacks’ of the internet economy.”

These laundry lists invite, at best, anecdotal public responses alleging examples of perceived “harm” falling into the specified categories. Little or no light is likely to be shed on the measurement of such harm, nor on the potential beneficial effects to some consumers from the practices complained of (for example, better targeted ads benefiting certain consumers). As such, a sound Section 5(n) assessment would be infeasible.

Under “harms to children,” the press release suggests possibly extending the limitations of the FTC-administered Children’s Online Privacy Protection Act (COPPA) to older teenagers, thereby in effect rewriting COPPA and usurping the role of Congress (a clear statutory overreach). The press release also asks “[s]hould new rules set out clear limits on personalized advertising to children and teenagers irrespective of parental consent?” It is hard (if not impossible) to understand how this form of overreach, which would displace the supervisory rights of parents (thereby imposing impossible-to-measure harms on them), could be shoe-horned into a defensible Section 5(n) cost-benefit assessment.

Under “regulations,” the press release asks whether “new rules [should] require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data?” Such new regulatory strictures (whose benefits to some consumers appear speculative) would interfere significantly in internal business processes. Specifically, they could substantially diminish the efficiency of business-security measures, diminish business incentives to innovate (for example, in encryption), and reduce dynamic competition among businesses.

Consumers also would be harmed by a related slowdown in innovation. Those costs undoubtedly would be high but hard, if not impossible, to measure. The FTC also asks whether a rule should limit “companies’ collection, use, and retention of consumer data.” This requirement, which would seemingly bypass consumers’ decisions to make their data available, would interfere with companies’ ability to use such data to improve business offerings and thereby enhance consumers’ experiences. Justifying new requirements such as these under Section 5(n) would be well-nigh impossible.

The category “discrimination” is especially problematic. In addressing “algorithmic discrimination,” the ANPRM press release asks whether the FTC should “consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based.” In addition, the press release asks “if the Commission [should] consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?”

The FTC cites no statutory warrant for the authority to combat such forms of “discrimination.” It is not a civil-rights agency. It clearly is not authorized to issue anti-discrimination rules dealing with “groups that current law does not recognize as protected from discrimination.” Any such rules, if issued, would be summarily struck down in no uncertain terms by the judiciary, even without regard to Section 5(n).

In addition, given the fact that “economic discrimination” often is efficient (and procompetitive) and may be beneficial to consumer welfare (see, for example, here), more limited economic anti-discrimination rules almost certainly would not pass muster under the Section 5(n) cost-benefit framework.     

Finally, while the ANPRM press release does contain a very short section entitled “costs and benefits,” that section lacks any specific reference to the required Section 5(n) evaluation framework. Phillips’ dissent points out that the ANPRM “simply fail[s] to provide the detail necessary for commenters to prepare constructive responses” on cost-benefit analysis.

He stresses that the broad nature of the requests for commenters’ views on costs and benefits renders the inquiry “not conducive to stakeholders submitting data and analysis that can be compared and considered in the context of a specific rule. … Without specific questions about [the costs and benefits of] business practices and potential regulations, the Commission cannot hope for tailored responses providing a full picture of particular practices.”

In other words, the ANPRM does not provide the guidance needed to prompt the sorts of responses that might assist the FTC in carrying out an adequate Section 5(n) cost-benefit analysis.

Conclusion

The FTC would face almost certain defeat in court if it promulgated a broad rule addressing many of the perceived unfairness-based “ills” alluded to in the ANPRM. Moreover, although its requirements would (I believe) not come into effect, such a rule nevertheless would impose major economic costs on society.

Prior to final judicial resolution of its status, the rule would disincentivize businesses from engaging in a variety of data-related practices that enhance business efficiency and benefit many consumers. Furthermore, the FTC resources devoted to developing and defending the rule would not be applied to alternative welfare-enhancing FTC activities—a substantial opportunity cost.

The FTC should take heed of these realities and opt not to carry out a rulemaking based on the ANPRM. It should instead devote its scarce consumer-protection resources to prosecuting hard-core consumer fraud and deception—and, perhaps, to launching empirical studies into the economic-welfare effects of data security and commercial surveillance practices. Such studies, if carried out, should focus on dispassionate economic analysis and avoid policy preconceptions. (For example, studies involving digital platforms should take note of the existing economic literature, such as a paper indicating that digital platforms have generated enormous consumer-welfare benefits not accounted for in gross domestic product.)

One can only hope that a majority of FTC commissioners will apply common sense and realize that far-flung rulemaking exercises lacking in statutory support are bad for the rule of law, bad for the commission’s reputation, bad for the economy, and bad for American consumers.


[1] The FTC states specifically that it “is issuing this ANPR[M] pursuant to Section 18 of the Federal Trade Commission Act”.

[2] Deceptive practices that might be addressed in a Section 18 trade regulation rule would be subject to the “FTC Policy Statement on Deception,” which states that “the Commission will find deception if there is a representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” A court reviewing an FTC Section 18 rule focused on “deceptive acts or practices” undoubtedly would consult this Statement, although it is not clear, in light of recent jurisprudential trends, that the court would defer to the Statement’s analysis in rendering an opinion. In any event, questions of deception, which focus on acts or practices that mislead consumers, would in all likelihood have little relevance to the evaluation of any rule that might be promulgated in light of the ANPRM.    

[TOTM: This guest post from Svetlana S. Gans and Natalie Hausknecht of Gibson Dunn is part of Truth on the Market’s continuing FTC UMC Symposium. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.]

The Federal Trade Commission (FTC) launched one of the most ambitious rulemakings in agency history Aug. 11, with its 3-2 vote to initiate an Advance Notice of Proposed Rulemaking (ANPRM) on commercial surveillance and data security. The divided vote, which broke down along partisan lines, stands in stark contrast to recent bipartisan efforts on Capitol Hill, particularly on the comprehensive American Data Privacy and Protection Act (ADPPA).

Although the rulemaking purports to pursue a new “privacy and data security” regime, it targets far more than consumer privacy. The ANPRM lays out a sweeping project to rethink the regulatory landscape governing nearly every facet of the U.S. internet economy, from advertising to anti-discrimination law, and even to labor relations. Any entity that uses the internet (even for internal purposes) is likely to be affected by this latest FTC action, and public participation in the proposed rulemaking will be important to ensure the agency gets it right.

Summary of the ANPRM  

The vague scope of the FTC’s latest ANPRM begins at its title: “Commercial Surveillance and Data Security” Rulemaking. The announcement states the FTC intends to explore rules “cracking down” on the “business of collecting, analyzing, and profiting from information about people.” The ANPRM then defines the scope of “commercial surveillance” to include virtually any data activity. For example, the ANPRM explains that it includes practices used “to set prices, curate newsfeeds, serve advertisements, and conduct research on people’s behavior, among other things.” The ANPRM also goes on to say that it is concerned about practices “outside of the retail consumer setting” that the agency traditionally regulates. Indeed, the ANPRM defines “consumer” to include “businesses and workers, not just individuals who buy or exchange data for retail goods and services.”

Unlike the bipartisan ADPPA, the ANPRM also takes aim at the “consent” model that the FTC has long advocated to ensure consumers make informed choices about their data online. It claims that “consumers may become resigned to” data practices and “have little to no actual control over what happens to their information.” It also suggests that consumers “do not generally understand” data practices well enough for their permission to be “meaningful”—making express consumer consent to data practices “irrelevant.”

The ANPRM further lists a disparate set of additional FTC concerns, from “pernicious dark pattern practices” to “lax data security practices” to “sophisticated digital advertising systems” to “stalking apps,” “cyber bullying, cyberstalking, and the distribution of child sexual abuse material,” and the use of “social media” among “kids and teens.” It “finally” wraps up with a reference to “growing reliance on automated systems” that may create “new forms and mechanisms for discrimination” in areas like housing, employment, and healthcare. The agency’s stated concern with these automated systems is their apparent “disparate outcomes” “even when automated systems consider only unprotected consumer traits.”

Having set out these concerns, the ANPRM seeks to justify a new rulemaking via a list of what it describes as “decades” of “consumer data privacy and security” enforcement actions. The rulemaking then requests that the public answer 95 questions, covering many different legal and factual issues. For example, the agency requests the public weigh in on the practices “companies use to surveil consumers,” intangible and unmeasurable “harms” created by such practices, the most harmful practices affecting children and teens, techniques that “manipulate consumers into prolonging online activity,” how the commission should balance costs and benefits from any regulation, biometric data practices, algorithmic errors and disparate impacts, the viability of consumer consent, the opacity of “consumer surveillance practices,” and even potential remedies the agency should consider.  

Commissioner Statements in Support of the ANPRM

Every Democratic commissioner issued a separate supporting statement. Chair Lina Khan’s statement justified the rulemaking on the grounds that the FTC is the “de facto law enforcer in this domain.” She also doubled down on the decision to address not only consumer privacy, but issues affecting all “opportunities in our economy and society, as well as core civil liberties and civil rights,” and described being “especially eager to build a record” related to: the limits of “notice and consent” frameworks, as opposed to withdrawing permission for data collection “in the first place”; how to navigate “information asymmetries” with companies; how to address certain “business models” “premised on” persistent tracking; discrimination in automated processes; and workplace surveillance.

Commissioner Rebecca Kelly Slaughter’s longer statement more explicitly attacked the agency’s “notice-and-consent regime” as having “failed to protect users.” She expressed hope that the new rules would take on biometric or location tracking, algorithmic decision-making, and lax data security practices as “long overdue.” Commissioner Slaughter further brushed aside concerns that the rulemaking was inappropriate while Congress considered comprehensive privacy legislation, asserting that the magnitude of the rulemaking was a reason to do it, not to shy away from it. She also expressed interest in data-minimization specifications, discriminatory algorithms, and kids and teens issues.

Commissioner Alvaro Bedoya’s short statement likewise expressed support for acting. However, he noted the public comment period would help the agency “discern whether and how to proceed.” Like his colleagues, he identified his particular interest in “emerging discrimination issues”: the mental health of kids and teens; the protection of non-English speaking communities; and biometric data. On the pending privacy legislation, he noted that:

[ADPPA] is the strongest privacy bill that has ever been this close to passing. I hope it does pass. I hope it passes soon…. This ANPRM will not interfere with that effort. I want to be clear: Should the ADPPA pass, I will not vote for any rule that overlaps with it.

Commissioner Statements Opposed to the ANPRM

Both Republican commissioners published dissents. Commissioner Christine S. Wilson’s dissent urged deference to Congress as it considers a comprehensive privacy law. Yet she also expressed broader concern about the FTC’s recent changes to its Section 18 rulemaking process, which “decrease opportunities for public input and vest significant authority for the rulemaking proceedings solely with the Chair,” and about the unjustified targeting of practices not subject to prior enforcement action. Notably, Commissioner Wilson also worried the rulemaking was unlikely to survive judicial scrutiny, indicating that Chair Khan’s statements give her “no basis to believe that she will seek to ensure that proposed rule provisions fit within the Congressionally circumscribed jurisdiction of the FTC.”

Commissioner Noah Phillips’ dissent criticized the ANPRM for failing to provide “notice of anything” and thus stripping the public of its participation rights. He argued that the ANPRM’s “myriad” questions appear to be a “mechanism to fish for legal theories that might justify outlandish regulatory ambition outside our jurisdiction.” He further noted that the rulemaking positions the FTC as a legislature to regulate in areas outside of its expertise (e.g., labor law) with potentially disastrous economic costs that it is ill-equipped to understand.

Commissioner Phillips further argued the ANPRM attacks disparate practices based on an “amalgam of cases concerning very different business models and conduct” that cannot show the prevalence of misconduct required for Section 18 rulemaking. He also criticized the FTC for abandoning its own informed-consent model based on paternalistic musings about individuals’ ability to decide for themselves. And finally, he criticized the FTC’s apparent overreach in claiming the mantle of “civil rights enforcer” when Congress never gave it explicit authority to declare discrimination or disparate impacts unlawful in this space.

Implications for Regulated Entities and Others Concerned with Potential Agency Overreach

The sheer breadth of the ANPRM demands the avid attention of potentially regulated entities and of those concerned with the FTC’s aggressive rulemaking agenda. The public should seek to participate meaningfully in the rulemaking process to ensure the FTC considers a broad array of viewpoints and has before it the facts necessary to properly define the scope of its own authority and the consequences of any proposed privacy regulation. For example, the FTC may issue a notice of proposed rulemaking defining acts or practices as unfair or deceptive “only where it has reason to believe that the unfair or deceptive acts or practices which are the subject of the proposed rulemaking are prevalent” (emphasis added).

15 U.S. Code § 57a also states that the FTC may make a determination that unfair or deceptive acts or practices are prevalent only if:  “(A) it has issued cease and desist orders regarding such acts or practices, or (B) any other information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices.” That means that, under the Magnuson-Moss Section 18 rulemaking that the FTC must use here, the agency must show (1) the prevalence of the practices (2) how they are unfair or deceptive, and (3) the economic effect of the rule, including on small businesses and consumers. Any final regulatory analysis also must assess the rule’s costs and benefits and why it was chosen over alternatives. On each count, effective advocacy supported by empirical and sound economic analysis by the public may prove dispositive.

The FTC may have a particularly difficult time meeting this burden of proof with many of the innocuous (and currently permitted) practices identified in the ANPRM. For example, features of modern online commerce like automated decision-making are part of the engine that has powered a decade of innovation, lowered logistical and opportunity costs, and opened up amazing new possibilities for small businesses seeking to serve local consumers and their communities. Commissioner Wilson makes this point well:

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. 

The FTC also may be setting itself on an imminent collision course with the “major questions” doctrine, in particular. On the last day of its term this year, the Supreme Court handed down West Virginia v. Environmental Protection Agency, which applied the “major questions doctrine” to rule that the EPA can’t base its controversial Clean Power Plan on a novel interpretation of a relatively obscure provision of the Clean Air Act. An agency rule of such vast “economic and political significance,” Chief Justice John Roberts wrote, requires “clear congressional authorization.” (See “The FTC Heads for Legal Trouble” by Svetlana Gans and Eugene Scalia.) Parties are likely to argue the same holds true here with regard to the FTC’s potential regulatory extension into areas like anti-discrimination and labor law. If the FTC remains on this aggressive course, any final privacy rulemaking could also be a tempting target for a reinvigorated nondelegation doctrine.  

Some members of Congress also may question the wisdom of the ANPRM venturing into the privacy realm at all right now, a point advanced by several of the commissioners. Shortly after the FTC’s announcement, House Energy and Commerce Committee Chairman Frank Pallone Jr. (D-N.J.) stated:

I appreciate the FTC’s effort to use the tools it has to protect consumers, but Congress has a responsibility to pass comprehensive federal privacy legislation to better equip the agency, and others, to protect consumers to the greatest extent.

Sen. Roger Wicker (R-Miss.), the ranking member on the Senate Commerce Committee and a leading GOP supporter of the bipartisan legislation, likewise said that the FTC’s move helps “underscore the urgency for the House to bring [ADPPA]  to the floor and for the Senate Commerce Committee to advance it through committee.”  

The FTC’s ANPRM will likely have broad implications for the U.S. economy. Stakeholders can participate in the rulemaking in several ways, including registering by Aug. 31 to speak at the FTC’s Sept. 8 public forum. Stakeholders should also consider submitting public comments and empirical evidence within 60 days of the ANPRM’s publication in the Federal Register, and insist that the FTC hold informal hearings as required under the Magnuson-Moss Act.

While the FTC is rightfully the nation’s top consumer cop, an advance notice of this scope demands active public awareness and participation to ensure the agency gets it right.

 

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases it actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” within the broader category of “sensitive covered data,” which could be collected or processed only with a consumer’s expression of affirmative consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change and “individual’s online activities” are once again deemed to be “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is necessary, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes the “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could have been strictly necessary for one of the other permissible purposes from Section 101(b), but none of them appear to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting information on an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and return to the definition in the version of the ADPPA that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception that could have partially addressed the concern in Section 101(b)(2) (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is simply the complement of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even data that are not personally identifiable and are not the result of a transformation of personally identifiable data still count as “de-identified data.” If this reading is correct, it produces an absurd result, sweeping essentially all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. hold some possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty (public commitment) unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then re-identify them later. This provision would effectively deter firms from de-identifying data (a form of data minimization) whenever they might, at any point in the future, need to link the data back to individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way for a third party to preserve the option of identifying the individuals linked to the data will be for that third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically, one would expect the law to favor sharing data in a de-identified form, which would align with the principle of data minimization. What the ADPPA does instead is effectively push firms that need to preserve the option of re-identification to share personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.
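To make the point concrete, here is a minimal, hypothetical sketch (in Python; nothing in it is drawn from the bill’s text) of the data-minimizing arrangement one would normally expect: the original covered entity replaces direct identifiers with keyed pseudonyms, keeps the secret key to itself, and shares only the de-identified records, retaining the ability to re-identify on its own side if a legitimate need arises. The ADPPA’s public-commitment and “share alike” duties, as drafted, appear to rule out precisely this arrangement.

```python
# Hypothetical illustration, not an ADPPA compliance recipe: keyed
# pseudonymization in which only the original data holder can re-identify.
import hmac
import hashlib
import secrets


class DataHolder:
    """The original covered entity; it alone holds the re-identification key."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # never shared with recipients

    def pseudonymize(self, record: dict) -> dict:
        """Replace the direct identifier with a keyed pseudonym before sharing."""
        token = hmac.new(self._key, record["email"].encode(), hashlib.sha256).hexdigest()
        return {"pseudonym": token, "purchases": record["purchases"]}

    def matches(self, pseudonym: str, email: str) -> bool:
        """Re-identification check, possible only for the key holder."""
        expected = hmac.new(self._key, email.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, pseudonym)


holder = DataHolder()
shared = holder.pseudonymize({"email": "jane@example.com", "purchases": 3})
# A recipient of `shared` sees only a pseudonym and non-identifying fields;
# the holder can still link the record back to the customer if it needs to.
print(shared)
print(holder.matches(shared["pseudonym"], "jane@example.com"))
```

A recipient of such records cannot link them back to anyone without the key, which is exactly the outcome data minimization aims for; yet because the holder retains the ability to re-identify, the bill’s “no re-identification” commitments would appear to foreclose it.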

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given open-ended questions such as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives are obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes was offered but withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt the California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond the ADPPA’s requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved, especially in the marked-up version, which reverts the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also deeply puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. These examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

This legislative process proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU). 

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to most of AML/CFT.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as “service providers”) to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties’ names, as well as the address, country, official personal document number, customer identification number, or the sender’s date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information “on the basis of documents, data or information obtained from a reliable and independent source” (Articles 4(4), 7(3), 14(5), 16(2) TFR). 
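For illustration only, here is a rough, hypothetical sketch (in Python; the field and type names are mine, not the regulation’s) of the kind of per-transfer record a service provider would have to assemble and retain under these provisions. It is meant to convey the breadth of the identifying data involved, not to restate the TFR’s precise requirements.

```python
# Hypothetical sketch of a per-transfer record under Articles 4, 14, and 21 TFR.
# Field names and structure are illustrative; the regulation prescribes no schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class PartyIdentity:
    name: str
    account_or_wallet: str                      # payment-account number or crypto-asset wallet address
    # At least one of the following identifiers must accompany the payer/originator:
    address: Optional[str] = None
    country: Optional[str] = None
    official_document_number: Optional[str] = None
    customer_id: Optional[str] = None
    birth_date_and_place: Optional[str] = None


@dataclass
class TransferRecord:
    originator: PartyIdentity
    beneficiary: PartyIdentity
    # Verification material "from a reliable and independent source": in practice,
    # copies of passports or ID cards, account statements, utility bills, etc.
    verification_documents: List[str] = field(default_factory=list)
    retention_years: int = 5                    # indiscriminate retention period (Article 21 TFR)
```

Multiplied across every bank transfer and every custodial crypto-asset transfer touching the EU, records of this kind are what make up the broad and indiscriminate collection and retention regime discussed below.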

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data is overwhelmingly likely to go beyond information on the civil identity of customers and will often, if not almost always, allow even sensitive personal data about the customer to be inferred.

The data-collection and retention obligations in the TFR are general and indiscriminate. No distinction is made in the TFR’s data-collection and retention provisions based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). Even that exception (“has reasonable grounds for suspecting money laundering or terrorist financing”) arguably lacks the precision required under CJEU case law.

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU gave its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive’s provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some of the features of the PNR regime that troubled the court are strikingly similar to features of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no “objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers’ air travel.” The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7-8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data-retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to that transaction data where there is “objective evidence capable of establishing a risk” that it is connected to serious crime.

The court also considered the issue of government access to data that has already been collected. Under the CJEU’s established interpretation of the EU Charter, “it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body.” In the PNR regime, at least some countries (such as Belgium) assigned this role to their “passenger information units” (PIUs). The court noted that a PIU is “an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities” (e.g., from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-a-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19 on provision of information) does not provide for prior independent review of access to retained data. To the extent that such a review is conducted by Financial Intelligence Units (FIUs) under the AML Directive, concerns arise that are very similar to those the court raised about PIUs under the PNR regime. While Article 32 of the AML Directive requires FIUs to be independent, that does not necessarily mean they are independent in the way required, under Articles 7-8 of the EU Charter, of the authority that will decide on access to retained data. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime point to evidence that it contributes to preventing or prosecuting some crime. But that is not the relevant question when it comes to proportionality. The relevant question is whether those measures are at least as effective as less costly, more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were “120 times the amount successfully recovered from criminals” and exceeded the estimated total of criminal funds (including funds not seized or identified). 

Nor can the fact that the current AML/CFT regime is a de facto global standard serve as sufficient justification, given that EU fundamental-rights law has proven perfectly comfortable with rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule was imported into EU law from U.S. law (via FATF) largely without question, even though the constitutional standards of privacy protection in the United States are very different from those under the EU Charter. The Court of Justice would likely take note of this in any putative challenge to the TFR or other elements of the AML/CFT regime. 

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime that, according to the best evidence, the current AML/CFT regime serves ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip the government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted broadly without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.