Biweekly FTC Roundup: Bureau of Let’s-Sue-Meta Edition

The Federal Trade Commission (FTC) might soon be charging rent to Meta Inc. The commission earlier this week issued (bear with me) an “Order to Show Cause why the Commission should not modify its Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (July 27, 2012), as modified by Order Modifying Prior Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (Apr. 27, 2020).”

It’s an odd one (I’ll get to that) and the third distinct Meta matter for the FTC in 2023.

Recall that the FTC and Meta faced off in federal court earlier this year, as the commission sought a preliminary injunction to block the company’s acquisition of virtual-reality studio Within Unlimited. As I wrote in a prior post, U.S. District Court Judge Edward J. Davila denied the FTC’s request in late January. Davila’s order was about more than just the injunction: it was predicated on the finding that the FTC was not likely to prevail in its antitrust case. That was not entirely surprising outside FTC HQ (perhaps not inside either), as I was but one in a long line of observers who had found the FTC’s case to be weak.

No matter for the not-yet-proposed FTC Bureau of Let’s-Sue-Meta, as there’s another FTC antitrust matter pending: the commission also seeks to unwind Facebook’s 2012 acquisition of Instagram and its 2014 acquisition of WhatsApp, even though the FTC reviewed both mergers at the time and allowed them to proceed. Apparently, antitrust apples are never too old for another bite. The FTC’s initial case seeking to unwind the earlier deals was dismissed, but its amended complaint has survived, and the case remains to be heard.

Back to the modification of the 2020 consent order, which famously set a record for privacy remedies: $5 billion, plus substantial behavioral remedies to run for 20 years (with the monetary penalty exceeding the EU’s highest by an order of magnitude). Then-Chair Joe Simons and then-Commissioners Noah Phillips and Christine Wilson accurately claimed that the settlement was “unprecedented, both in terms of the magnitude of the civil penalty and the scope of the conduct relief.” Two commissioners—Rebecca Slaughter and Rohit Chopra—dissented: they thought the unprecedented remedies inadequate.

I commend Chopra’s dissent, if only as an oddity. He rightly pointed out that the commissioners’ analysis of the penalty was “not empirically well grounded.” At no time did the commission produce an estimate of the magnitude of consumer harm, if any, underlying the record-breaking penalty. It never claimed to.

That’s odd enough. But then Chopra opined that “a rigorous analysis of unjust enrichment alone—which, notably, the Commission can seek without the assistance of the Attorney General—would likely yield a figure well above $5 billion.” That subjective likelihood also seemed to lack an empirical basis; certainly, Chopra provided none.

By all accounts, then, the remedies appeared to be wholly untethered from the magnitude of consumer harm wrought by the alleged violations. To be clear, I’m not disputing that Facebook violated the 2012 order, such that a 2019 complaint was warranted, even if I wonder now, as I wondered then, how a remedy that had nothing to do with the magnitude of harm could be an efficient one.

Now, Commissioner Alvaro Bedoya has issued a statement correctly acknowledging that "[t]here are limits to the Commission's order modification authority." Specifically, the commission must "identify a nexus between the original order, the intervening violations, and the modified order." Bedoya wrote that he has "concerns about whether such a nexus exists" for one of the proposed modifications. He still voted to go ahead with the proposal, as did Slaughter and Chair Lina Khan, neither of whom voiced any such concerns at all.

It's odder still. In its heavily redacted order, the commission appears to ground its proposal in conduct alleged to have occurred before the 2020 order that it now seeks to modify. There are no intervening violations there. For example:

From December 2017 to July 2019, Respondent also made misrepresentations relating to its Messenger Kids (“MK”) product, a free messaging and video calling application “specifically intended for users under the age of 13.”

. . . [Facebook] represented that MK users could communicate in MK with only parent-approved contacts. However, [Facebook] made coding errors that resulted in children participating in group text chats and group video calls with unapproved contacts under certain circumstances.

Perhaps, but what circumstances? According to Meta (and the FTC), Meta discovered, corrected, and reported the coding errors to the FTC in 2019. Of course, Meta is bound to comply with the 2020 consent order. But was it bound to do so in 2019? The company has always been subject to the FTC's "unfair and deceptive acts and practices" (UDAP) authority, but why allege 2019 violations now?

What harm is being remedied? On the one hand, there seems to have been an inaccurate statement about something parents might care about: a representation that users could communicate in Messenger Kids only with parent-approved contacts. On the other hand, there’s no allegation that such communications (with approved contacts of the approved contacts) led to any harm to the kids themselves.

Given all of that, why does the commission seek to impose substantial new requirements on Meta? For example, it now seeks to restrict Meta from:

…collecting, using, selling, licensing, transferring, sharing, disclosing, or otherwise benefitting from Covered Information collected from Youth Users for the purposes of developing, training, refining, improving, or otherwise benefitting Algorithms or models; serving targeted advertising, or enriching Respondent’s data on Youth users.

There’s more, but that’s enough to have “concerns about” the existence of a nexus between the since-remedied coding errors and the proposed “modification.” Or to put it another way, I wonder what one has to do with the other.

The only violation alleged to have occurred after the 2020 consent order was finalized has to do with the initial 2021 report of the assessor—an FTC-approved independent monitor of Facebook/Meta’s compliance—covering the period from October 25, 2020 to April 22, 2021. There, the assessor reported that:

 …the key foundational elements necessary for an effective [privacy] program are in place . . . [but] substantial additional work is required, and investments must be made, in order for the program to mature.

We don’t know what this amounts to. The initial assessment reported that the basic elements of the firm’s “comprehensive privacy program” were in place, but that substantial work remained. Did progress lag expectations? What were the failings? Were consumers harmed? Did Facebook/Meta fail to address deficiencies identified in the report? If so, for how long? We’re not told a thing.

Again, what’s the nexus? And why the requirement that Meta “delete Covered Information collected from a User as a Youth unless [Meta] obtains Affirmative Express Consent from the User within a reasonable time period, not to exceed six (6) months after the User’s eighteenth birthday”? That’s a worry, not because there’s nothing there, but because substantial additional costs are being imposed without any account of their nexus to consumer harm, supposing there is one.

Some might prefer such an opt-in policy—one of two that would be required under the proposed modification—but it's not part of the 2020 consent agreement and it's not otherwise part of U.S. law. It does resemble a requirement under the EU's General Data Protection Regulation. But the GDPR is not U.S. law, and there are good reasons for that: see, for example, here, here, here, and here.

For one thing, a required opt-in for all such information, in all the ways it may live on in the firm's data and models, can be onerous for users, not just the firm. Will young adults be spared concrete harms because of the requirement? It's highly likely that they'll have less access to information (and access to less information), but highly unlikely that the reduction will be confined to that to which they (and their parents) would not consent. What will be the net effect?

Requirements imposed "[p]rior to … introducing any new or modified products, services, or features" raise a question about the level of granularity anticipated: limitations on the use of covered information apply to the training, refining, or improving of any algorithm or model, and products, services, or features might be modified in various ways daily, or even in real time. Any such modification requires that the most recent independent assessment report find that all the many requirements of the mandated privacy program have been met. If not, then nothing new—including no modifications—is permitted until the assessor provides written confirmation that all material gaps and weaknesses have been "fully" remediated.

Is this supposed to entail independent oversight of every design decision involving information from youth users? Automated modifications? Or that everything come to a halt if any issues are reported? I gather that nobody—not even Meta—proposes to give the company carte blanche with youth information. But carte blanque?

As we’ve been discussing extensively at today’s International Center for Law & Economics event on congressional oversight of the commission, the FTC has a dual competition and consumer-protection enforcement mission. Efficient enforcement of the antitrust laws requires, among other things, that the costs of violations (including remedies) reflect the magnitude of consumer harm. That’s true for privacy, too. There’s no route to coherent—much less complementary—FTC-enforcement programs if consumer protection imposes costs that are wholly untethered from the harms it is supposed to address.