The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war over private firms’ use of data and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other legal bases on which it can argue it relies to make use of user data, but a larger issue is at play: the decision finds both that using user data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to the DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing, but only invalidates “contractual necessity” as a basis for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising without depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations. What is technically possible to offer can be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of the contractual relationship between service providers and their users, substituting instead an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to reach a conclusion as to whether those services are economically viable without the use of personalized advertising.

However, there is a key institutional point to be made here: privacy regulators are ill-equipped to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth considering why Europeans’ best interests are served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of the Georgia Institute of Technology, in that it would prohibit both transferring data to other countries and involving foreign capital in the processing of even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because the agency saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, see also Theodore Christakis), the evidence behind the cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons have little to do with national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by distributing data and network connections redundantly and in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of the French cloud provider OVH famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts, or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to treat seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows in the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which its beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but will serve to wall the region off from the kind of investment that can make it thrive.

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” within the broader category of “sensitive covered data,” which could be collected or processed only with a consumer’s affirmative express consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of the ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change, and “individual’s online activities” are once again deemed “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent at all; firms will instead be asked to prove that their collection of sensitive data was “strictly necessary.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, for example, the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide that, in some cases, it is, we can expect a good deal of litigation on this point. That litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes a “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if that is possible, then collecting and processing the information needed for targeted advertising, including information on an “individual’s online activities” such as unique identifiers (Section 2(39)), must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, such collection could be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appear to apply to collecting data for targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. There should therefore be no legal ambiguity about when collecting information on an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Yet the “strictly necessary” standard leaves that question open. Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and revert to the definition used in the version of the ADPPA that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception, in Section 101(b)(2), that could have partially addressed the concern (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the complement of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not the result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data,
  2. hold some possibility of re-identification (weaker than “reasonably linkable”), and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty (public commitment) unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) if they may, at any point in the future, need to link the data back to individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for that third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically, one would have expected the ADPPA to favor sharing data in de-identified form, in line with the principle of data minimization. Instead, it effectively requires that data be shared together with identifying information whenever the recipient may need to re-identify them. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but larger firms are given no similar entitlement. Given open-ended questions such as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives are obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation, notably committee members Anna Eshoo and Doris Matsui (both D-Calif.), have expressed concern that the bill would pre-empt the California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond the ADPPA’s requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” It is unclear how courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA, which magnifies the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has improved since; indeed, the marked-up version regressed to some of the notably bad features of the original discussion draft. The rules on de-identified data remain very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. These examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.