It’s been an eventful two weeks for those following the story of the European Union’s implementation of the Digital Markets Act (DMA). On March 18, the European Commission began a series of workshops with the companies designated as “gatekeepers” under the DMA: Apple, Meta, Alphabet, Amazon, ByteDance, and Microsoft. Even as those workshops were still ongoing, the Commission announced noncompliance investigations against Alphabet, Apple, and Meta. Finally, the European Parliament’s Committee on the Internal Market and Consumer Protection (IMCO) held its own session on DMA implementation.
Many aspects of those developments are worth commenting on, and you can expect more competition-related analysis on Truth on the Market soon. Here, I will focus on what these developments mean for data privacy and security.
Before the DMA’s enactment, I raised serious concerns about how its rules would affect user privacy and security, arguing that mandated interoperability poses significant risks to users. I also noted that mandating an app “sideloading” option takes away from users the choice of the enhanced security offered by the “walled garden” model. Indeed, privacy and security concerns were largely sidelined in the DMA’s legislative process: instead of genuinely engaging with the very real tradeoffs, legislators resorted to a “pass the buck” strategy.
Gatekeepers’ Responsibility for Service Privacy and Security
The DMA’s text offers a few grudging and vague permissions (not even obligations) for gatekeepers to do only what is “strictly necessary” for some (not all) aspects of service privacy and security. It is hard to avoid the impression that the legislators hoped that, when something does ultimately go wrong, the blame will fall on the gatekeepers, and not on poorly thought-through legislation.
During the Parliament’s DMA session, European Commissioner for Competition Margrethe Vestager said:
I also see part of the communication against the DMA most obviously saying, well, your gadget is not safe anymore because of the DMA. That is complete nonsense. It has nothing to do with each other. It is for the companies to decide how will they present their services, their operating system, how will they make them safe for you and comply with the DMA.
Setting aside her defensive hyperbole about the “nonsense” of the DMA reducing safety, Vestager clearly identified the gatekeepers as those responsible for complying with the DMA while ensuring their services’ safety. This has important practical consequences, especially for interoperability, data portability, and sideloading. It means, for example, that when the DMA requires gatekeepers to interoperate with others, it is up to the gatekeepers to ensure that would-be interoperators are reliable. As I wrote two years ago:
If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.
It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether the interoperating third parties’ IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.
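To make the weakness of such technical filters concrete, here is a minimal sketch of an IP-jurisdiction check of the kind described above. It is my own illustration, not drawn from any gatekeeper’s actual systems; the geolocation lookup is a hypothetical stand-in for a GeoIP-style database:

```python
# Hypothetical sketch of a jurisdiction-based "technical filter" for an
# interoperability API. All names and data here are illustrative.

APPROVED_JURISDICTIONS = {"DE", "FR", "NL"}  # illustrative allow-list

# Stand-in for a GeoIP-style lookup; a real system would query an
# IP-geolocation database instead of this toy table.
GEOIP_STUB = {"203.0.113.7": "DE", "198.51.100.9": "RU"}

def country_of(ip: str) -> str:
    return GEOIP_STUB.get(ip, "??")

def may_access_interop_api(caller_ip: str) -> bool:
    return country_of(caller_ip) in APPROVED_JURISDICTIONS

print(may_access_interop_api("198.51.100.9"))  # False: geolocated to "RU"
print(may_access_interop_api("203.0.113.7"))   # True: but this address could
# just as well be a VPS or VPN exit in Germany rented by a foreign bad actor,
# which is why such filters pose little difficulty to motivated attackers.
```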
During the DMA workshop, Meta’s representative made a similar point, especially with respect to the interoperability of WhatsApp and Messenger:
… it’s worth emphasizing at the outset that the legal obligation to provide interop in a safe and secure way falls on the gatekeeper alone, not on third parties. So, in some ways, this is different from a traditional interoperability obligation that you might see in telecoms, which is more typically symmetrical in fashion.
Commission officials did not question this point, and themselves noted the privacy and security safeguards the DMA provides for messaging interoperability.
Meta’s stated approach to complying with the messaging-interoperability mandate is to provide effective interoperability “while maximizing the security, privacy, and safety of users.” Its representatives argued that WhatsApp’s current client-server architecture enables features like spam detection and enforcement against bad actors, which help keep users safe. In opening WhatsApp’s network to third parties, Meta said it is following the same client-server model that WhatsApp uses today, which lets WhatsApp rely on the same signals it already receives to run integrity checks and identify abusive accounts. In response to feedback, Meta also added a proxy-server option between third-party clients and WhatsApp’s server.
Still, Meta’s representative noted that the proxy option reduces WhatsApp’s ability to maximize security, privacy, and safety relative to the pure client-server model. To preserve end-to-end encryption, WhatsApp’s interoperability solution relies on the same Signal protocol that WhatsApp itself uses today. Meta’s representative acknowledged, however, that interoperability inherently means WhatsApp no longer has the same visibility into, and control over, endpoints. As a result, it cannot guarantee, as it does today, that only the sender and recipient can access messages across all endpoints.
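To illustrate why endpoint visibility shrinks, consider a toy model (my own simplification, not Meta’s implementation) of the server-side signals available for abuse detection. The server relays only opaque ciphertext, and several of its signals depend on properties of the sending client that third-party and proxied clients do not offer; the signal categories below are assumptions for illustration:

```python
# Toy model of server-side integrity signals under messaging interop.
# The signal categories are illustrative assumptions, not a description
# of WhatsApp's actual abuse-detection systems.

from dataclasses import dataclass

@dataclass
class Endpoint:
    client_id: str
    first_party: bool   # WhatsApp's own client vs. a third-party client
    via_proxy: bool     # connecting through the optional proxy server

def integrity_signals(sender: Endpoint) -> list[str]:
    # Metadata-only signals survive in every configuration, because the
    # server never sees plaintext (only Signal-protocol ciphertext).
    signals = ["per-account send rate", "contact-graph patterns"]
    if not sender.via_proxy:
        # A proxy pools many clients behind one network identity,
        # degrading address-level reputation.
        signals.append("IP-level reputation")
    if sender.first_party:
        # Only a first-party client can be relied on to report its own
        # version, attach on-device spam reports, and so on.
        signals += ["client-version attestation", "on-device spam reports"]
    return signals

wa_client = Endpoint("wa-1", first_party=True, via_proxy=False)
third_party = Endpoint("tp-1", first_party=False, via_proxy=True)
print(integrity_signals(wa_client))    # all five signal categories
print(integrity_signals(third_party))  # only the two metadata signals
```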
The DMA requires that users opt in to receiving messages from third-party services. Meta’s representative said this gives users control and avoids exposing them to third-party contact without permission. Meta also believes it is essential that users can readily tell interoperable chats apart from WhatsApp chats: because WhatsApp loses endpoint control, third parties may use different encryption; proxy use affects integrity controls; and there are inherent differences in functionality.
Regarding the DMA’s rules on user-data portability, Meta brought up the 2018 Cambridge Analytica controversy. The irony is that Meta could now be forced to undo the safeguards it implemented in response to that scandal. Meta claims to have adapted its data-portability tools to address privacy stakeholders’ concerns in ways that do not impede the portability it wants to offer. There is, however, an inevitable tension between the interests of third-party service providers and safeguarding users from attacks and manipulation. The option for a user to instantaneously send all their Facebook data to any recipient with a single click could easily become a godsend for bad actors.
The question remains how to make it less likely that users will be induced (through “dark patterns” and otherwise) to authorize data transfers they would instantly regret once they learned the consequences. Businesses that want to benefit from data portability will downplay this issue by pointing out that they are good, trustworthy actors. The real question, however, is what measures should be implemented to ensure that untrustworthy actors do not benefit from portability.
Because responsibility lies with the gatekeeper, to the extent that app sideloading and third-party app stores are mandated, gatekeepers may still need to ensure that those changes do not significantly diminish users’ privacy and security. At its DMA workshop, Apple discussed “notarization” as a key safeguard introduced in iOS to protect users who sideload apps. Notarization involves automated and human-review processes meant to ensure that apps are free from known malware, function as described, and don’t expose users to egregious fraud.
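As a rough illustration of that gatekeeping logic (a hypothetical sketch, not Apple’s actual pipeline), notarization can be thought of as a gate that an app must pass before it is distributable, combining automated scanning with human review:

```python
# Hypothetical sketch of a notarization-style gate. The checks mirror the
# three goals Apple described (no known malware, functions as described,
# no egregious fraud); everything else is invented for illustration.

from dataclasses import dataclass

@dataclass
class AppSubmission:
    binary_hash: str
    declared_functionality: str

KNOWN_MALWARE_HASHES = {"abc123"}  # toy stand-in for a malware-signature feed

def automated_scan(app: AppSubmission) -> bool:
    return app.binary_hash not in KNOWN_MALWARE_HASHES

def human_review(app: AppSubmission) -> bool:
    # Placeholder: a reviewer confirms the app works as described and
    # does not expose users to egregious fraud.
    return bool(app.declared_functionality)

def notarize(app: AppSubmission) -> bool:
    # Both the automated and the human-review stages must pass.
    return automated_scan(app) and human_review(app)

print(notarize(AppSubmission("def456", "photo editor")))  # True
print(notarize(AppSubmission("abc123", "photo editor")))  # False: known malware
```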
Access to search-query data, discussed during the Alphabet/Google DMA workshop, also poses privacy concerns. Google’s representatives emphasized that some search queries contain very sensitive personal data, which creates a need to balance carefully the opportunities the DMA creates for rival search engines against the attendant risks to user privacy. To address those risks, Google uses a frequency-thresholding method: the shared dataset includes only queries entered at least 30 times globally, over 13 months, by signed-in users. Google’s representative noted the difficulty, especially in smaller regions, of getting the data out while anonymizing it as required. For smaller jurisdictions, Google found, the combination of query, country, and URL can still expose personal data.
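A minimal sketch of such frequency thresholding (my own illustration; the field names and pipeline details are assumptions, not Google’s actual system) also shows why small jurisdictions remain risky: the threshold applies to global counts, not per-country ones:

```python
# Illustrative frequency-threshold filter for shared search-query data.
# The 30-occurrence global threshold and the signed-in requirement come
# from Google's workshop description; everything else is assumed.

from collections import Counter

MIN_GLOBAL_COUNT = 30  # "entered at least 30 times globally over 13 months"

def releasable_rows(records: list[dict]) -> list[tuple]:
    """records: dicts with keys 'query', 'country', 'url', 'signed_in',
    already restricted to the 13-month reporting window."""
    eligible = [r for r in records if r["signed_in"]]
    global_counts = Counter(r["query"] for r in eligible)
    return [
        (r["query"], r["country"], r["url"])
        for r in eligible
        if global_counts[r["query"]] >= MIN_GLOBAL_COUNT
    ]

# A query can clear the global threshold while being entered only once in a
# small country; the released (query, country, URL) row may then point to a
# single identifiable person -- the exposure risk Google described.
```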
Will Gatekeepers Be Allowed to Protect Service Privacy and Security?
Officials charged with DMA enforcement may be tempted to pretend that some privacy and security problems do not exist, or to dismiss gatekeepers’ privacy- and security-enhancing measures as mere cover for self-interest. This could prevent gatekeepers from adopting the service privacy and security measures needed to address risks that the DMA itself created.
The first temptation could be to argue that the existence of laws like the General Data Protection Regulation (GDPR) significantly limits gatekeepers’ responsibility. In other words, “because of the GDPR,” there is not much that could be “strictly necessary” for a gatekeeper to do to protect user privacy. This would, however, ignore the realities of privacy-law enforcement, as I have written:
… solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.
The second temptation stems from the fact that some privacy- and security-protecting measures do, in fact, align with gatekeepers’ business interests beyond their interest in upholding a reputation for providing safe services. The uncomfortable truth for the DMA’s authors is that increasing personal-data privacy and security by limiting data sharing and keeping data under “lock and key” is in the economic interest of the large service providers, to whom users are happy to provide “first party” data. A user arguably has more control over what Facebook does with their data for advertising purposes than over what happens in the “open advertising ecosystem,” which often involves data sharing on a massive scale. As Eric Seufert noted in “Content Fortresses and the new privacy landscape”:
The market shift being observed now with Content Fortresses is ultimately a consolidation trend that collapses the various layers of consumer technology up into the apex predator platform companies.
DMA enforcement that chooses to remain blind to this duality of interests will see only the gatekeepers’ self-interest, dismissing the user benefits of their actions. It will also fail to account for the risks posed by the (very much self-interested) actions of the businesses that hope to gain from DMA enforcement.
Conclusion
Based on the events of the past two weeks, do we have any reason to think that the implementation process will cure the defects of the legislative process? A full cure is unlikely, given that some security-diminishing choices (e.g., sideloading) are among the DMA’s core features, and their risks can only be mitigated, not removed. Still, some of the measures taken by the gatekeepers suggest that the companies will use the DMA’s permissions to ensure that compliance does not put their users at risk.
The DMA enforcers may, however, succumb to the temptation of pretending that some privacy and security problems do not exist (or are “solved” by the GDPR), or the related temptation of dismissing privacy- and security-enhancing measures as taken only in the gatekeepers’ self-interest. The result could be that gatekeepers are prevented from implementing some needed safeguards. And when things go wrong, officials will deflect the blame onto the gatekeepers.