Archives For Facebook

The Federal Trade Commission (FTC) might soon be charging rent to Meta Inc. The commission earlier this week issued (bear with me) an “Order to Show Cause why the Commission should not modify its Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (July 27, 2012), as modified by Order Modifying Prior Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (Apr. 27, 2020).”

It’s an odd one (I’ll get to that) and the third distinct Meta matter for the FTC in 2023.

Recall that the FTC and Meta faced off in federal court earlier this year, as the commission sought a preliminary injunction to block the company’s acquisition of virtual-reality studio Within Unlimited. As I wrote in a prior post, U.S. District Court Judge Edward J. Davila denied the FTC’s request in late January. Davila’s order was about more than just the injunction: it was predicated on the finding that the FTC was not likely to prevail in its antitrust case. That was not entirely surprising outside FTC HQ (perhaps not inside either), as I was but one in a long line of observers who had found the FTC’s case to be weak.

No matter for the not-yet-proposed FTC Bureau of Let’s-Sue-Meta, as there’s another FTC antitrust matter pending: the commission also seeks to unwind Facebook’s 2012 acquisition of Instagram and its 2014 acquisition of WhatsApp, even though the FTC reviewed both mergers at the time and allowed them to proceed. Apparently, antitrust apples are never too old for another bite. The FTC’s initial case seeking to unwind the earlier deals was dismissed, but its amended complaint has survived, and the case remains to be heard.

Back to the modification of the 2020 consent order, which famously set a record for privacy remedies: $5 billion, plus substantial behavioral remedies to run for 20 years (with the monetary penalty exceeding the EU’s highest by an order of magnitude). Then-Chair Joe Simons and then-Commissioners Noah Phillips and Christine Wilson accurately claimed that the settlement was “unprecedented, both in terms of the magnitude of the civil penalty and the scope of the conduct relief.” Two commissioners—Rebecca Slaughter and Rohit Chopra—dissented: they thought the unprecedented remedies inadequate.

I commend Chopra’s dissent, if only as an oddity. He rightly pointed out that the commissioners’ analysis of the penalty was “not empirically well grounded.” At no time did the commission produce an estimate of the magnitude of consumer harm, if any, underlying the record-breaking penalty. Nor did it ever claim to have done so.

That’s odd enough. But then Chopra opined that “a rigorous analysis of unjust enrichment alone—which, notably, the Commission can seek without the assistance of the Attorney General—would likely yield a figure well above $5 billion.” That subjective likelihood also seemed to lack an empirical basis; certainly, Chopra provided none.

By all accounts, then, the remedies appeared to be wholly untethered from the magnitude of consumer harm wrought by the alleged violations. To be clear, I’m not disputing that Facebook violated the 2012 order, such that a 2019 complaint was warranted, even if I wonder now, as I wondered then, how a remedy that had nothing to do with the magnitude of harm could be an efficient one. 

Now, Commissioner Alvaro Bedoya has issued a statement correctly acknowledging that “[t]here are limits to the Commission’s order modification authority.” Specifically, the commission must “identify a nexus between the original order, the intervening violations, and the modified order.” Bedoya wrote that he has “concerns about whether such a nexus exists” for one of the proposed modifications. He still voted to go ahead with the proposal, as did Slaughter and Chair Lina Khan, neither of whom voiced any concerns at all.

It gets odder still. In its heavily redacted order, the commission appears to ground its proposal in conduct alleged to have occurred before the 2020 order that it now seeks to modify. There are no intervening violations there. For example:

From December 2017 to July 2019, Respondent also made misrepresentations relating to its Messenger Kids (“MK”) product, a free messaging and video calling application “specifically intended for users under the age of 13.”

. . . [Facebook] represented that MK users could communicate in MK with only parent-approved contacts. However, [Facebook] made coding errors that resulted in children participating in group text chats and group video calls with unapproved contacts under certain circumstances.

Perhaps, but what circumstances? According to Meta (and the FTC), Meta discovered, corrected, and reported the coding errors to the FTC in 2019. Of course, Meta is bound to comply with the 2020 Consent Order. But were they bound to do so in 2019? They’ve always been subject to the FTC’s “unfair and deceptive acts and practices” (UDAP) authority, but why allege 2019 violations now?

What harm is being remedied? On the one hand, there seems to have been an inaccurate statement about something parents might care about: a representation that users could communicate in Messenger Kids only with parent-approved contacts. On the other hand, there’s no allegation that such communications (with approved contacts of the approved contacts) led to any harm to the kids themselves.

Given all of that, why does the commission seek to impose substantial new requirements on Meta? For example, the commission now seeks restrictions on Meta:

…collecting, using, selling, licensing, transferring, sharing, disclosing, or otherwise benefitting from Covered Information collected from Youth Users for the purposes of developing, training, refining, improving, or otherwise benefitting Algorithms or models; serving targeted advertising, or enriching Respondent’s data on Youth users.

There’s more, but that’s enough to have “concerns about” the existence of a nexus between the since-remedied coding errors and the proposed “modification.” Or to put it another way, I wonder what one has to do with the other.

The only violation alleged to have occurred after the 2020 consent order was finalized has to do with the initial 2021 report of the assessor—an FTC-approved independent monitor of Facebook/Meta’s compliance—covering the period from October 25, 2020 to April 22, 2021. There, the assessor reported that:

 …the key foundational elements necessary for an effective [privacy] program are in place . . . [but] substantial additional work is required, and investments must be made, in order for the program to mature.

We don’t know what this amounts to. The initial assessment reported that the basic elements of the firm’s “comprehensive privacy program” were in place, but that substantial work remained. Did progress lag expectations? What were the failings? Were consumers harmed? Did Facebook/Meta fail to address deficiencies identified in the report? If so, for how long? We’re not told a thing. 

Again, what’s the nexus? And why the requirement that Meta “delete Covered Information collected from a User as a Youth unless [Meta] obtains Affirmative Express Consent from the User within a reasonable time period, not to exceed six (6) months after the User’s eighteenth birthday”? That’s a worry, not because there’s nothing there, but because substantial additional costs are being imposed without any account of their nexus to consumer harm, supposing there is one.

Some might prefer such an opt-in policy—one of two that would be required under the proposed modification—but it’s not part of the 2020 consent agreement, and it’s not otherwise part of U.S. law. It does resemble a requirement under the EU’s General Data Protection Regulation. But the GDPR is not U.S. law, and there are good reasons for that; see, for example, here, here, here, and here.

For one thing, a required opt-in for all such information—in all the ways that it may live on in the firm’s data and models—can be onerous for users, and not just for the firm. Will young adults be spared concrete harms because of the requirement? It’s highly likely that they’ll have less access to information (and to less information), but highly unlikely that the reduction will be confined to that to which they (and their parents) would not consent. What will be the net effect?

Requirements “[p]rior to … introducing any new or modified products, services, or features” raise a question about the level of granularity anticipated, given that limitations on the use of covered information apply to the training, refining, or improving of any algorithm or model, and that products, services, or features might be modified in various ways daily, or even in real time. Any such modification would require that the most recent independent assessment report find that all the many requirements of the mandated privacy program have been met. If not, then nothing new—including no modifications—is permitted until the assessor provides written confirmation that all material gaps and weaknesses have been “fully” remediated.

Is this supposed to entail independent oversight of every design decision involving information from youth users? Automated modifications? Or that everything come to a halt if any issues are reported? I gather that nobody—not even Meta—proposes to give the company carte blanche with youth information. But carte blanque?

As we’ve been discussing extensively at today’s International Center for Law & Economics event on congressional oversight of the commission, the FTC has a dual competition and consumer-protection enforcement mission. Efficient enforcement of the antitrust laws requires, among other things, that the costs of violations (including remedies) reflect the magnitude of consumer harm. That’s true for privacy, too. There’s no route to coherent—much less complementary—FTC-enforcement programs if consumer protection imposes costs that are wholly untethered from the harms it is supposed to address. 

Regulators around the globe are scrambling for a silver bullet to “tame” tech companies. Whether it’s the United States, the United Kingdom, Australia, South Africa, or Canada, the animating rationale behind such efforts is that firms like Google, Apple, Meta, and Amazon (GAMA) engage in undesirable market conduct that falls beyond the narrow purview of antitrust law (here and here).

To tackle these supposed ills, which range from exclusionary practices and disinformation to encroachments on privacy and democratic institutions, it is asserted that sweeping new ex ante rules must be enacted and the playing field tilted in favor of enforcement agencies, which have hitherto faced what advocates characterize as insurmountable procedural hurdles (here and here).

Amid these international calls for regulatory intervention, the EU’s Digital Markets Act (DMA) has been seen as a lodestar by advocates of more aggressive competition policy. Beyond addressing social anxieties about unchecked tech power, the DMA’s primary appeal is that it claims to strive for two goals with almost universal appeal: fairness and market contestability.

Unfortunately, the DMA is not the paragon of regulation that it is sometimes made out to be. Indeed, the law is structured less to advance any purportedly universal set of principles than to align digital platforms’ business models with an idiosyncratic and specifically European industrial policy, rooted in politics and protectionism. As explained below, it is unlikely that other countries would benefit from emulating this strategy.

The DMA’s Protectionist Origins

While the DMA is today often lauded as eminently pro-competition (here and here), prior to its adoption, many leading European politicians were touting the text as a protectionist industrial-policy tool that would hinder U.S. firms to the benefit of European rivals: a far cry from the purely consumer-centric tool it is sometimes made out to be. French Minister of the Economy Bruno Le Maire, for example, acknowledged as much in 2021 when he said:

Digital giants are not just nice companies with whom we need to cooperate, they are rivals, rivals of the states that do not respect our economic rules, which must therefore be regulated… There is no political sovereignty without technological sovereignty. You cannot claim sovereignty if your 5G networks are Chinese, if your satellites are American, if your launchers are Russian and if all the products are imported from outside.

This logic dovetails neatly with the EU’s broader push for “technology sovereignty,” a strategy intended to reduce the continent’s dependence on technologies that originate abroad. The strategy already has been institutionalized at different levels of EU digital and industrial policy (see here and here). In fact, the European Parliament’s 2020 Briefing on “Digital Sovereignty for Europe” explicitly anticipates that an ex ante regulatory regime similar to the DMA would be a central piece of that puzzle. French President Emmanuel Macron summarized it well when he said:

If we want technological sovereignty, we’ll have to adapt our competition law, which has perhaps been too much focused solely on the consumer and not enough on defending European champions.

Moreover, it can be argued that the DMA was never intended to promote European companies that could seriously challenge the dominance of U.S. firms (see here at 13:40-14:20). Rather, the goal was always to redistribute rents across the supply chain away from digital platforms and toward third parties and competitors (what is referred to as “business users,” as opposed to “end users”). After all, with the arguable exception of Spotify and Booking.com, the EU has none of the former, and plenty of the latter. Indeed, as Pablo Ibañez Colomo has written:

The driver of many disputes that may superficially be seen as relating to leveraging can be rationalised, more convincingly, as attempts to re-allocate rents away from vertically-integrated incumbents to rivals.

Alternative Digital Strategies to the DMA

While the DMA strives to use universal language and has a clear ambition to set global standards, under this veneer of objectivity lies a very particular vision of industrial policy and a certain normative understanding of how rents should be allocated across the value chain. That vision is not apt for everyone and, indeed, may not be apt for anyone (see here). Other countries can certainly look to the EU for inspiration and, admittedly, it would be ludicrous to expect them to ignore what goes on in the bloc.

When deciding whether and what sort of legislation to enact, however, other countries should ultimately seek those approaches that are appropriate to their own context. What they ought not do is reflexively copy templates made with certain goals in mind, which they might not share and which may be diametrically opposed to their own interests or values. Below are some suggestions for alternative strategies to the DMA.

Doubling Down on Sound Competition Laws

Mounting evidence suggests that tech companies increasingly consider the costs of regulatory compliance in planning their business strategy. For example, Meta is reportedly considering shutting down political advertising in Europe to avoid the hassle of complying with the EU’s upcoming rules on online campaigning. Just this week, it was revealed that Twitter may be considering pulling out of the EU because it doesn’t have the capacity to comply with the Code of Practice on Disinformation, a “voluntary” agreement that the Digital Services Act (DSA) will nevertheless make binding.

The EU—the world’s third-largest economy—can perhaps afford to impose costly and burdensome regulation on digital companies, because it has considerable leverage to ensure (with some, though as we have seen, by no means absolute, certainty) that they will not desert the European market. Smaller economies that are unlikely to be seen by GAMA as essential markets are playing a different game.

Not only do they have much smaller carrots to dangle, but they also disproportionately benefit from the enormous infrastructural investments and consumer benefits brought by GAMA (see, for example, here and here). In this context, the wiser strategy for smaller, ostensibly “nonessential” markets might be to court GAMA, rather than to castigate it. Instead of imposing intricate, costly, and untested regulatory obligations on digital platforms, these countries may reasonably wish to emphasize or bolster the transparency, predictability, and procedural safeguards (including credible judicial review) of their competition-law systems. After all, to regulate competition, you must first attract it.

Indeed, while competition is as important in developing markets as developed ones, developing markets are especially dependent upon competition rules that encourage investment in infrastructure to facilitate economic growth and that offer a secure environment for ongoing innovation. Particularly for relatively young, rapidly evolving industries like digital markets, attracting consistent investment and industry know-how ensures that such markets can innovate and transition into maturity (here and here).

Moreover, the case-by-case approach of competition law allows enforcers to tackle harmful behavior while capturing digital platforms’ procompetitive benefits, rather than throwing the baby out with the bathwater by imposing blanket prohibitions. As Giuseppe Colangelo has suggested, the assumption that competition laws are insufficient to tackle anticompetitive conduct in digital markets is a questionable one, given that most of the DMA’s contemplated prohibitions have also been the object of separate antitrust suits in the EU.

Careful Consideration of Costs and Unintended Consequences

DMA-style ex ante regulation is still untested. Its benefits, if any, remain mostly theoretical. A tradeoff between, say, foreign direct investment (FDI) and ex ante regulation might make sense for some emerging markets if it were clear what was being traded, and at what cost. Alas, such regulations are still in an incipient phase.

The U.S. antitrust bills targeting a handful of companies seem unlikely to be adopted soon; the UK’s Digital Markets Unit proposal has still not been put to Parliament; and Japan and South Korea have imposed codes of conduct only in narrow areas. Even the DMA—the most comprehensive legislative attempt to “rein in” digital companies—entered into force only last October, and it will not start imposing its obligations on gatekeepers until February or March 2024, at the earliest.

At the same time, there are a range of risks and possible unintended consequences associated with the DMA, such as the privacy dangers of sideloading and interoperability mandates; worsening product quality as a result of blanket bans on self-preferencing; decreased innovation; obstruction of the rule of law; and double and even triple jeopardy because of the overlaps between the DMA and EU competition rules. 

Despite the uncertainty inherent in deploying experimental regulation in a fast-moving market, the EU has clearly decided that these risks are not sufficient to offset the DMA’s benefits (see here for a critical appraisal). But other countries should not take its word for it.

In conducting an independent examination, they may place more value on some of the DMA’s expected negative consequences, or may find their likelihood of occurring to be unacceptably high. This could be due to endogenous or highly context-dependent factors. In some cases, the tradeoff could mean too large a sacrifice of FDI, while in others, the rules could impinge on legitimate policy priorities, like national security. In either case, countries should evaluate the risks and benefits of the ex ante regulation of digital platforms themselves, and go their own way.

Conclusion

There are, of course, other good reasons why the DMA shouldn’t be so readily emulated by everyone, everywhere, all at once.

Giving enforcers wide discretionary powers to reshape digital markets and override product-design decisions might not be a good idea in countries with a poor track record of keeping corruption in check, or where enforcers lack the required know-how to do so effectively. Simple norms, backed by the rule of law, may not be sufficient to counteract these background conditions. But they also may be preferable to the broad mandates and tools envisioned by the kinds of ex ante regulatory proposals currently in vogue.

Smaller countries with limited budgets would probably also benefit more from castigating unequivocally harmful (and widespread) conduct, like cartels (the “cancers of the market economy”), bid rigging, distortive state aid, and mergers that create actual monopolies (see, for example, here and here), than from applying experimental regulation underpinned by tenuous theories of harm and indeterminate benefits.

In the end, the DMA has been mistakenly taken to be a panacea or a blueprint for how to regulate tech, when it is neither. It is, instead, a particularistic approach that may or may not achieve its stated goals. In any case, it must be understood as an outgrowth of a certain industrial-policy strategy and a sui generis vision of how digital markets should distribute rents (spoiler alert: in the interest of European companies).

Last week’s roundup was postponed because I was kibitzing at the spring meeting of the American Bar Association (ABA) Antitrust Section. For those outside the antitrust world, the spring meeting is the annual antitrust version of Woodstock. For those inside the antitrust world: Antitrust Woodstock is not really a thing. Viewed at the planetary-orbit level, the two events are similar in that both comprise stretches that are alternately engaging, interesting, fun, odd, and stultifying. There were more than 3,500 competition lawyers and economists in one place, if not one room. Imagine it, then pour yourself a good stiff drink.

With apologies—this says nothing flattering about me—my spring meeting highlight was a bit of a Freudian slip by Bill Baer, the former head of the U.S. Justice Department’s (DOJ) Antitrust Division. Voicing support for the Biden administration’s antitrust policies and personnel, Baer expressed admiration for the Tim Wu book “The Curse of Business.” A most excellent and fitting title, if not precisely the one on the book’s cover. Your (occasionally) humble antediluvian scribe learnt about antitrust law and economics so long ago that I still imagine that consumer welfare matters (many consumers are actually people, it turns out) and that antitrust is supposed to protect commerce, not curse it.  

As a former enforcer with friends still inside the building, I found not a few sessions to be very, very enforcement-friendly, as if someone had confused a perspective with the perspective. The enforcers were very much on-message. It’s full speed ahead on enforcement and regulation, some conspicuous setbacks in the courts notwithstanding.

Curiously, they seem to regard some of the losses as wins. In February, I briefly described U.S. District Court Judge Edward J. Davila’s order denying the Federal Trade Commission’s (FTC) request for a preliminary injunction to block Meta’s proposed acquisition of virtual-reality fitness-app maker Within. The denial was not so preliminary, as it rested on a finding that “the FTC has not demonstrated a likelihood of ultimate success on the merits.” Reading the writing on the wall, and in the order, the FTC then dropped the matter.

At the spring meeting, however, we heard detailed and satisfied reports about the court endorsing the FTC’s theory of the case as a potentially viable theory, but only clipped, sotto voce recognition of the fact that they lost. That is, a federal district court, setting no precedent, recognized that there were such things as viable potential competition cases. Right. And the FTC’s case was not one of them. Is there such a thing as a Pyrrhic loss? 

More FTC Departures Made Public

Everybody rightly notices the appointees—Commissioner Christine S. Wilson’s last day coincided with the last day of the spring meeting—but let’s not forget about the staff. Michael Vita, deputy director of the Bureau of Economics, retired, and that’s a loss for the FTC. Some of Mike’s work is still posted here. Note that Mike helped to kick off the FTC’s famous hospital-merger retrospective study program before it was a program. He did rather a lot. Cheers to Mike.

I also learned about the departure of Holly Vedova, Chair Lina Khan’s first director of the Bureau of Competition, and author of the fabled “Vedova letters.” And Elizabeth Kraus, who did a great deal for the FTC’s international program, is also out the door, as was Randy Tritell earlier in the administration. 

A Not Completely Unreasonable Click-to-Cancel Rule

Some version of this could be right, if not this one.

On March 23, the FTC proposed a “click to cancel” amendment to its Negative Option Rule. I’ll discuss this more fully in a later post, but for now, I’d suggest two high-level observations:

  1. The proposed rule is overly broad; but
  2. There is at least a real problem in the area, and one that might be properly amenable to FTC consumer-protection rulemaking.

That is, firms sometimes make it so hard to cancel various types of contracts—such as automatic renewals—that there’s one or another species of fraud at work. The initial offer was deceptive, or they’re imposing an undue (and unforeseen) tax on consumers, or they’re foisting a supposed contract-in-perpetuity on unsuspecting consumers, and collecting funds without real authorization. Or all of the above. All actionable, and perhaps there’s a viable and well-tailored rule in there somewhere.

That doesn’t mean that the FTC has proposed the right or correctly focused regulation, but there is, at least, a there there. I recommend Commissioner Wilson’s final dissent, alas, for more. 

FTC Scores a Win, Against Itself

Spoiler Alert: Having lost its case against the Illumina-Grail merger before the commission’s own administrative law judge (ALJ), the FTC appealed to itself, found itself convincing, and ordered Illumina to divest Grail. In doing so, the commission mirrored last September’s decision from the European Commission.

I wrote about the case here. I won’t pretend to have evaluated all the facts and circumstances of what’s been, after all, a rule-of-reason case. Still, I’ll note again that this was a vertical acquisition with some obvious efficiencies and a not-so-obvious foreclosure argument. The commission’s press release says that bringing the early-cancer-detection test to market is extremely important, and potentially life-saving. We’re also told that:

Illumina has an enormous financial incentive to ensure that Grail wins the innovation race in the U.S. MCED market. Illumina stands to earn substantially more profit on the sale of GRAIL tests than it does by supporting rival test developers.

So . . . that seems like a pretty good argument on behalf of the merger. Rather than recapitulate the whole thing, I’ll point readers to Alden Abbott’s ToTM discussion earlier this week, another by Thom Lambert, an amicus brief by my International Center for Law & Economics colleagues Geoff Manne and Gus Hurwitz (plus a number of other law & economics scholars), and a thorough critique of the FTC’s case by Bruce Kobayashi (former director of the FTC’s Bureau of Economics) and Tim Muris (former FTC chairman).

But Elsewhere, the Commission Won’t Just Take the Win

One more quick note—this one on the now-abandoned Altria-Juul deal—but first a confession of priors: I hate tobacco and I miss my dad, a long-time heavy smoker who did, indeed, fall victim to lung cancer. Too much information, perhaps.

With that said (or typed), this case wasn’t about cigarettes. Tobacco products are lawful, there’s no shortage of information about tobacco risks, and the FTC is not a health and safety regulator.

There’s a lot about the case that’s complicated, but one issue that remains curious is the FTC’s persistence, given that, notwithstanding the loss before its own ALJ, the commission seems to have gotten more or less everything it sought in its notice of contemplated relief (part of the initial complaint):

  • The transaction has been abandoned;
  • Altria has divested itself of its stake in Juul;
  • The parties have agreed to terminate the various challenged agreements associated with the now-abandoned transaction (including a challenged agreement not to compete, in anticipation of the now-abandoned acquisition);
  • The parties have proposed an enforceable (by the FTC) agreement not to enter into any new transaction in the relevant market without prior approval;
  • The parties have proposed to provide prior notice of any other transactions in the relevant market; and
  • The parties have proposed to provide for outside monitoring, at their own expense, for a period of time.

So why aren’t they taking the win? Khan and Assistant Attorney General Jonathan Kanter seem fond of saying that they’re not scared of losing, but they shouldn’t be scared of winning either, should they?

The FTC’s raft of proposed rulemakings seems to suppose that they can enforce rules and orders, with substantial fines at their disposal, and in this matter, they would have been aided in monitoring by interested third parties in the industry. So, as the young’uns were asking last evening: why is this night different from all other nights?

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM, called “Bard,” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “actual malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard comprise content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained engines to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM. ChatGPT itself said as much in response to my query.

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought over false information provided by an LLM app. Notably, however, people who have not received significant public recognition are highly unlikely to be known by these LLM apps (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from falsehoods spread online are those who are in the public eye. This means that, for purposes of a defamation suit, public figures are the most likely plaintiffs.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was publicized to a third party and that it resulted in damages to him. For the sake of argument, let’s say he can do both. Even then, the case isn’t won, because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that the AI’s creators could be said to have trained it in a way that they “knew” the answers provided would be false. The more interesting question is whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, while the LLMs are still in their infancy and producing false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures—and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results.

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics.

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still at a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

In a Feb. 14 column in the Wall Street Journal, Commissioner Christine Wilson announced her intent to resign her position on the Federal Trade Commission (FTC). For those curious to know why, she beat you to the punch in the title and subtitle of her column: “Why I’m Resigning as an FTC Commissioner: Lina Khan’s disregard for the rule of law and due process make it impossible for me to continue serving.”

This is the seventh FTC roundup I’ve posted to Truth on the Market since joining the International Center for Law & Economics (ICLE) last September, having left the FTC at the end of August. Relentlessly astute readers of this column may have observed that I cited (and linked to) Commissioner Wilson’s dissents in five of my six previous efforts—actually, to three of them in my Nov. 4 post alone.

As anyone might guess, I’ve linked to Wilson’s dissents (and concurrences, etc.) for the same reason I’ve linked to other sources: I found them instructive in some significant regard. Priors and particular conclusions of law aside, I generally found Wilson’s statements to be well-grounded in established principles of antitrust law and economics. I cannot say the same about statements from the current majority.

Commission dissents are not merely the bases for blog posts or venues for venting. They can provide a valuable window into agency matters for lawmakers and, especially, for the courts. And I would suggest that they serve an important institutional role at the FTC, whatever one thinks of the merits of any specific matter. There’s really no point to having a five-member commission if all its votes are unanimous and all its opinions uniform. Moreover, establishing the realistic possibility of dissent can lend credence to those commission opinions that are unanimous. And even in these fractious times, there are such opinions.     

Wilson did not spring forth fully formed from the forehead of the U.S. Senate. She began her FTC career as a Georgetown student, serving as a law clerk in the Bureau of Competition; she returned some years later to serve as chief of staff to Chairman Tim Muris; and she returned again when confirmed as a commissioner in April 2018 (later sworn in in September 2018). In between stints at the FTC, she gained antitrust experience in private practice, both in law firms and as in-house counsel. I would suggest that her agency experience, combined with her work in the private sector, provided a firm foundation for the judgments required of a commissioner.

Daniel Kaufman, former acting director of the FTC’s Bureau of Consumer Protection, reflected on Wilson’s departure here. Personally, with apologies for the platitude, I would like to thank Commissioner Wilson for her service.  And, not incidentally, for her consistent support for agency staff.

Her three Democratic colleagues on the commission also thanked her for her service, if only collectively, and tersely: “While we often disagreed with Commissioner Wilson, we respect her devotion to her beliefs and are grateful for her public service. We wish her well in her next endeavor.” That was that. No doubt heartfelt. Wilson’s departure column was a stern rebuke to the Commission, so there’s that. But then, stern rebukes fly in all directions nowadays.

While I’ve never been a commissioner, I recall a far nicer and more collegial sendoff when I departed from my lowly staff position. Come to think of it, I had a nicer sendoff when I left a large D.C. law firm as a third-year associate bound for a teaching position, way back when.

So, what else is new?

In January, I noted that “the big news at the FTC is all about noncompetes”; that is, about the FTC’s proposed rule to ban the use of noncompetes more-or-less across the board. The rule would cover all occupations and all income levels, with a narrow exception for the sale of a business in which the “employee” has at least a 25% ownership stake (why 25%?), and a brief nod to statutory limits on the commission’s regulatory authority with regard to nonprofits, common carriers, and some other entities.

Colleagues Brian Albrecht (and here), Alden Abbott, Gus Hurwitz, and Corbin K. Barthold also have had things to say about it. I suggested that there were legitimate reasons to be concerned about noncompetes in certain contexts—sometimes on antitrust grounds, and sometimes for other reasons. But certain contexts are far from all contexts, and a mixed and developing body of economic literature, coupled with limited FTC experience in the subject, did not militate in favor of nearly so sweeping a regulatory proposal. This is true even before we ask practical questions about staffing for enforcement or, say, whether the FTC Act conferred the requisite jurisdiction on the agency.

This is the first or second FTC competition rulemaking ever, depending on how one counts, and it is the first this century, in any case. Here’s administrative scholar Thomas Merrill on FTC competition rulemaking. Given the Supreme Court’s recent articulation of the major questions doctrine in West Virginia v. EPA, a more modest and bipartisan proposal might have been far more prudent. A bad turn at the court can lose more than the matter at hand. Comments are due March 20, by the way.

Now comes a missive from the House Judiciary Committee, along with multiple subcommittees, about the noncompete NPRM. The letter opens by stating that “The Proposed Rule exceeds its delegated authority and imposes a top-down one-size-fits-all approach that violates basic American principles of federalism and free markets.” And “[t]he Biden FTC’s proposed rule on non-compete clauses shows the radicalness of the so-called ‘hipster’ antitrust movement that values progressive outcomes over long-held legal and economic principles.”

Ouch. Other than that, Mr. Jordan, how did you like the play?

There are several single-spaced pages on the “FTC’s power grab” before the letter gets to a specific, and substantial, formal document request in the service of congressional oversight. That does not stop the rulemaking process, but it does not bode well either.

Part of why this matters is that there’s still solid, empirically grounded, pro-consumer work that’s at risk. In my first Truth on the Market post, I applauded FTC staff comments urging New York State to reject a certificate of public advantage (COPA) application. As I noted there, COPAs are rent-seeking mechanisms chiefly aimed at insulating anticompetitive mergers (and sometimes conduct) from federal antitrust scrutiny. Commission and staff opposition to COPAs was developed across several administrations on well-established competition principles and a significant body of research regarding hospital consolidation, health care prices, and quality of care.

Office of Policy Planning (OPP) Director Elizabeth Wilkins has now announced that the parties in question have abandoned their proposed merger. Wilkins thanks the staff of OPP, the Bureau of Economics, and the Bureau of Competition for their work on the matter, and rightly so. There’s no new-fangled notion of Section 5 or mergers at play. The work has developed over decades and it’s the sort of work that should continue. Notwithstanding numerous (if not legion) departures, good and experienced staff and established methods remain, and ought not to be repudiated, much less put at risk.    

Oh, right, Meta/Within. On Jan. 31, U.S. District Court Judge Edward J. Davila denied the FTC’s request for a preliminary injunction blocking Meta’s proposed acquisition of Within. On Feb. 9, the commission announced “that this matter in its entirety be and it hereby is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.”

So, what happened? Much ink has been spilled on the weakness of the FTC’s case, both within ToTM (you see what I did there?) and without. ToTM posts by Dirk Auer, Alden Abbott, Gus Hurwitz, Gus again, and I enjoyed no monopoly on skepticism. Ashley Gold called the case “a stretch”; Gary Shapiro, in Fortune, called it “laughable.” And as Gus had pointed out, even the New York Times seemed skeptical.

I won’t recapitulate the much-discussed case, but on the somewhat-less-discussed matter of the withdrawal, I’ll consider why the FTC announced that the matter “is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.” While the matter was not litigated to its conclusion in federal court, the substantial and workmanlike opinion denying the preliminary injunction made it clear that the FTC had lost on the facts under both of the theories of harm to potential competition that they’d advanced.

“Having reviewed and considered the objective evidence of Meta’s capabilities and incentives, the Court is not persuaded that this evidence establishes that it was ‘reasonably probable’ Meta would enter the relevant market.”

An appeal in the 9th U.S. Circuit Court of Appeals likely seemed fruitless. Stopping short of a final judgment, the FTC could have tried for a do-over in its internal administrative Part 3 process, and might have fared well before itself, but that would have demanded considerable additional resources in a case that, in the long run, was bound to be a loser. Bloomberg had previously reported that the commission voted to proceed with the case against the merger contra the staff’s recommendation. Here, the commission noted that “Complaint Counsel [the Commission’s own staff] has not registered any objection” to Meta’s motion to withdraw proceedings from adjudication.

There are novel approaches to antitrust. And there are the courts and the law. And, as noted above, many among the staff are well-versed in that law and experienced at investigations. You can’t always get what you want, but if you try sometimes, you get what you deserve.

In the world of video games, the process by which players train themselves or their characters in order to overcome a difficult “boss battle” is called “leveling up.” I find that the phrase also serves as a useful metaphor in the context of corporate mergers. Here, “leveling up” can be thought of as acquiring another firm in order to enter or reinforce one’s presence in an adjacent market where a larger and more successful incumbent is already active.

In video-game terminology, that incumbent would be the “boss.” Acquiring firms choose to level up when they recognize that building internal capacity to compete with the “boss” is too slow, too expensive, or is simply infeasible. An acquisition thus becomes the only way “to beat the boss” (or, at least, to maximize the odds of doing so).

Alas, this behavior is often mischaracterized as a “killer acquisition” or “reverse killer acquisition.” What separates leveling up from killer acquisitions is that the former serves to turn the merged entity into a more powerful competitor, while the latter attempts to weaken competition. In the case of “reverse killer acquisitions,” the assumption is that the acquiring firm would have entered the adjacent market anyway absent the merger, leaving even more firms competing in that market.

In other words, the distinction ultimately boils down to a simple (though hard to answer) question: could both the acquiring and target firms have effectively competed with the “boss” without a merger?

Because they are ubiquitous in the tech sector, these mergers—sometimes also referred to as acquisitions of nascent competitors—have drawn tremendous attention from antitrust authorities and policymakers. All too often, policymakers fail to adequately consider the realistic counterfactual to a merger and mistake leveling up for a killer acquisition. The most recent high-profile example is Meta’s acquisition of the virtual-reality fitness app Within. But in what may be a hopeful sign of a turning of the tide, a federal court appears set to clear that deal over objections from the Federal Trade Commission (FTC).

Some Recent ‘Boss Battles’

The canonical example of leveling up in tech markets is likely Google’s acquisition of Android back in 2005. While Apple had not yet launched the iPhone, it was already clear by 2005 that mobile would become an important way to access the internet (including Google’s search services). Rumors were swirling that Apple, following its tremendously successful iPod, had started developing a phone, and Microsoft had been working on Windows Mobile for a long time.

In short, there was a serious risk that Google would be reliant on a single mobile gatekeeper (i.e., Apple) if it did not move quickly into mobile. Purchasing Android was seen as the best way to do so. (Indeed, averting an analogous sort of threat appears to be driving Meta’s move into virtual reality today.)

The natural next question is whether Google or Android could have succeeded in the mobile market absent the merger. My guess is that the answer is no. In 2005, Google did not produce any consumer hardware. Quickly and successfully making the leap would have been daunting. As for Android:

Google had significant advantages that helped it to make demands from carriers and OEMs that Android would not have been able to make. In other words, Google was uniquely situated to solve the collective action problem stemming from OEMs’ desire to modify Android according to their own idiosyncratic preferences. It used the appeal of its app bundle as leverage to get OEMs and carriers to commit to support Android devices for longer with OS updates. The popularity of its apps meant that OEMs and carriers would have great difficulty in going it alone without them, and so had to engage in some contractual arrangements with Google to sell Android phones that customers wanted. Google was better resourced than Android likely would have been and may have been able to hold out for better terms with a more recognizable and desirable brand name than a hypothetical Google-less Android. In short, though it is of course possible that Android could have succeeded despite the deal having been blocked, it is also plausible that Android became so successful only because of its combination with Google. (citations omitted)

In short, everything suggests that Google’s purchase of Android was a good example of leveling up. Note that much the same could be said about the company’s decision to purchase Fitbit in order to compete against Apple and its Apple Watch (which quickly dominated the market after its launch in 2015).

A more recent example of leveling up is Microsoft’s planned acquisition of Activision Blizzard. In this case, the merger appears to be about improving Microsoft’s competitive position in the platform market for game consoles, rather than in the adjacent market for games.

At the time of writing, Microsoft is staring down the barrel of a gun: Sony is on the cusp of becoming the runaway winner of yet another console generation. Microsoft’s executives appear to have concluded that this is partly due to a lack of exclusive titles on the Xbox platform. Hence, they are seeking to purchase Activision Blizzard, one of the most successful game studios, known among other things for its acclaimed Call of Duty series.

Again, the question is whether Microsoft could challenge Sony by improving its internal game-publishing branch (known as Xbox Game Studios) or whether it needs to acquire a whole new division. This is obviously a hard question to answer, but a cursory glance at the titles shipped by Microsoft’s publishing studio suggests that the issues it faces could not be resolved simply by throwing more money at its existing capacities. Indeed, Xbox Game Studios seems to be plagued by organizational failings that might only be solved by creating more competition within the company. As one gaming journalist summarized:

The current predicament of these titles goes beyond the amount of money invested or the buzzwords used to market them – it’s about Microsoft’s plan to effectively manage its studios. Encouraging independence isn’t an excuse for such a blatantly hands-off approach which allows titles to fester for years in development hell, with some fostering mistreatment to occur. On the surface, it’s just baffling how a company that’s been ranked as one of the top 10 most reputable companies eight times in 11 years (as per RepTrak) could have such problems with its gaming division.

The upshot is that Microsoft appears to have recognized that its own game-development branch is failing, and that acquiring a well-functioning rival is the only way to rapidly compete with Sony. There is thus a strong case to be made that competition authorities and courts should be cautious about blocking the merger, as it has at least the potential to significantly increase competition in the game-console industry.

Finally, leveling up is sometimes a way for smaller firms to try and move faster than incumbents into a burgeoning and promising segment. The best example of this is arguably Meta’s effort to acquire Within, a developer of VR fitness apps. Rather than being an attempt to thwart competition from a competitor in the VR app market, the goal of the merger appears to be to compete with the likes of Google, Apple, and Sony at the platform level. As Mark Zuckerberg wrote back in 2015, when Meta’s VR/AR strategy was still in its infancy:

Our vision is that VR/AR will be the next major computing platform after mobile in about 10 years… The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing….

Over the next few years, we’re going to need to make major new investments in apps, platform services, development / graphics and AR. Some of these will be acquisitions and some can be built in house. If we try to build them all in house from scratch, then we risk that several will take too long or fail and put our overall strategy at serious risk. To derisk this, we should acquire some of these pieces from leading companies.

In short, many of the tech mergers that critics portray as killer acquisitions are just as likely to be attempts by firms to compete head-on with incumbents. This “leveling up” is precisely the sort of beneficial outcome that antitrust laws were designed to promote.

Building Products Is Hard

Critics are often quick to apply the “killer acquisition” label to any merger where a large platform is seeking to enter or reinforce its presence in an adjacent market. The preceding paragraphs demonstrate that it’s not that simple, as these mergers often enable firms to improve their competitive position in the adjacent market. For obvious reasons, antitrust authorities and policymakers should be careful not to thwart this competition.

The harder part is how to separate the wheat from the chaff. While I don’t have a definitive answer, an easy first step would be for authorities to more seriously consider the supply side of the equation.

Building a new product is incredibly hard, even for the most successful tech firms. Microsoft famously failed with its Zune music player and Windows Phone. The Google+ social network never gained any traction. Meta’s foray into the cryptocurrency industry was a sobering experience. Amazon’s Fire Phone bombed. Even Apple, which usually epitomizes Silicon Valley firms’ ability to enter new markets, has had its share of dramatic failures: Apple Maps, its Ping social network, and the first HomePod, to name a few.

To put it differently, policymakers should not assume that internal growth is always a realistic alternative to a merger. Instead, they should carefully examine whether such a strategy is timely, cost-effective, and likely to succeed.

This is obviously a daunting task. Firms will struggle to dispositively show that they need to acquire the target firm in order to effectively compete against an incumbent. The question essentially hinges on the quality of the firm’s existing management, engineers, and capabilities. All of these are difficult—perhaps even impossible—to measure. At the very least, policymakers can improve the odds of reaching a correct decision by approaching these mergers with an open mind.

Under Chair Lina Khan’s tenure, the FTC has opted for the opposite approach and taken a decidedly hostile view of tech acquisitions. The commission sued to block both Meta’s purchase of Within and Microsoft’s acquisition of Activision Blizzard. Likewise, several economists—notably Tommaso Valletti—have called for policymakers to reverse the burden of proof in merger proceedings, and opined that all mergers should be viewed with suspicion because, absent efficiencies, they always reduce competition.

Unfortunately, this skeptical approach is something of a self-fulfilling prophecy: when authorities view mergers with suspicion, they are likely to be dismissive of the benefits discussed above. Mergers will be blocked and entry into adjacent markets will occur via internal growth. 

Large tech companies’ many failed attempts to enter adjacent markets via internal growth suggest that such an outcome would ultimately harm the digital economy. Too many “boss battles” will needlessly be lost, depriving consumers of precious competition and destroying startup companies’ exit strategies.

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on the use of data by private firms, as well as a major blow to the ad-driven business model that underlies most online services. 

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other bases on which it can argue it relies in order to make use of user data, but a larger issue is at play: the decision finds both that making use of user data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular contractual basis for processing, but only invalidates “contractual necessity” for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising while not depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations. What is technically possible to offer can also be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of a contractual relationship between service providers and their users, adopting an artificial, formalistic approach instead. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to reach a conclusion as to whether those services are economically viable without the use of personalized advertising.

However, there is a key institutional point to be made here. Privacy regulators are likely to be eminently unprepared to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

“Just when I thought I was out, they pull me back in!” says Al Pacino’s character, Michael Corleone, in The Godfather Part III. That’s how Facebook and Google must feel about S. 673, the Journalism Competition and Preservation Act (JCPA).

Gus Hurwitz called the bill dead in September. Then it passed the Senate Judiciary Committee. Now, there are some reports that suggest it could be added to the obviously unrelated National Defense Authorization Act (it should be noted that the JCPA was not included in the version of NDAA introduced in the U.S. House).

For an overview of the bill and its flaws, see Dirk Auer and Ben Sperry’s tl;dr. The JCPA would force “covered” online platforms like Facebook and Google to pay for journalism accessed through those platforms. When a user posts a news article on Facebook, which then drives traffic to the news source, Facebook would have to pay. I won’t get paid for links to my banger cat videos, no matter how popular they are, since I’m not a qualifying publication.

I’m going to focus on one aspect of the bill: the use of “final offer arbitration” (FOA) to settle disputes between platforms and news outlets. FOA is sometimes called “baseball arbitration” because it is used for contract disputes in Major League Baseball. This form of arbitration has also been implemented in other jurisdictions to govern similar disputes, notably by the Australian ACCC.

Before getting to the more complicated case, let’s start simple.

Scenario #1: I’m a corn farmer. You’re a granary who buys corn. We’re both invested in this industry, so let’s assume we can’t abandon negotiations in the near term and need to find an agreeable price. In a market, people make offers. Prices vary each year. I decide when to sell my corn based on prevailing market prices and my beliefs about when they will change.

Scenario #2: A government agency comes in (without either of us asking for it) and says the price of corn this year is $6 per bushel. In conventional economics, we call that a price regulation. Unlike a market price, where both sides sign off, regulated prices do not enjoy mutual agreement by the parties to the transaction.

Scenario #3: Instead of a price imposed independently by regulation, one of the parties (say, the corn farmer) may seek a higher price of $6.50 per bushel and petition the government. The government agrees and the price is set at $6.50. We would still call that price regulation, but the outcome reflects what at least one of the parties wanted, and some may argue that it helps “the little guy.” (Let’s forget that many modern farms are large operations with bargaining power. In our heads and in this story, the corn farmer is still a struggling mom-and-pop about to lose their house.)

Scenario #4: Instead of listening only to the corn farmer, both the farmer and the granary tell the government their “final offer” and the government picks one of those offers, not somewhere in between. The parties don’t give any reasons—just the offer. This is called “final offer arbitration” (FOA).

As an arbitration mechanism, FOA makes sense, even if it is not always ideal. It avoids some of the issues that can attend “splitting the difference” between the parties. 
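To make the mechanism concrete, here is a toy sketch of an FOA decision rule (my own illustration, not drawn from the JCPA’s text or MLB’s actual rules): the arbitrator must pick one side’s offer outright, typically whichever sits closer to the arbitrator’s own estimate of fair value, rather than splitting the difference.

```python
def final_offer_arbitration(farmer_offer: float,
                            granary_offer: float,
                            arbitrator_estimate: float) -> float:
    """Return the winning price: the offer nearer the arbitrator's estimate.

    Unlike conventional arbitration, there is no averaging -- one side's
    final offer is adopted wholesale.
    """
    if abs(farmer_offer - arbitrator_estimate) <= abs(granary_offer - arbitrator_estimate):
        return farmer_offer
    return granary_offer


# Suppose the arbitrator thinks $6.00/bushel is fair.
# An aggressive ask loses to a moderate one:
print(final_offer_arbitration(6.25, 5.90, 6.00))  # granary's 5.90 prevails
print(final_offer_arbitration(6.05, 5.50, 6.00))  # farmer's 6.05 prevails
```

The winner-take-all structure is the point: because an extreme offer is likely to lose outright, each side has an incentive to submit something moderate, which is why FOA avoids the escalation that plagues split-the-difference arbitration.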

While it is better than other systems, it is still a price regulation. In the JCPA’s case, it would not be imposed immediately; the two parties can negotiate on their own (in the shadow of the imposed FOA). And the actual arbitration decision wouldn’t technically be made by the government, but by a third party. Fine. But ultimately, after stripping away the veneer, this is all just an elaborate mechanism built atop the threat of the government choosing the price in the market.

I call that price regulation. The losing party does not like the agreement and never agreed to the overall mechanism. Unlike in voluntary markets, at least one of the parties does not agree with the final price. Moreover, neither party explicitly chose the arbitration mechanism. 

The JCPA’s FOA system is not precisely like the baseball situation. In baseball, there is choice on the front-end. Players and owners agree to the system. In baseball, there is also choice after negotiations start. Players can still strike; owners can enact a lockout. Under the JCPA, the platforms must carry the content. They cannot walk away.

I’m an economist, not a philosopher. The problem with force is not that it is unpleasant. Instead, the issue is that force distorts the knowledge conveyed through market transactions. That distortion prevents resources from moving to their highest valued use. 

How do we know the apple is more valuable to Armen than it is to Ben? In a market, “we” don’t need to know. No benevolent outsider needs to pick the “right” price for other people. In most free markets, a seller posts a price. Buyers just need to decide whether they value it more than that price. Armen voluntarily pays Ben for the apple and Ben accepts the transaction. That’s how we know the apple is in the right hands.

Often, transactions are about more than just price. Sometimes there may be haggling and bargaining, especially on bigger purchases. Workers negotiate wages, even when the ad stipulates a specific wage. Home buyers make offers and negotiate. 

But this just kicks up the issue of information to one more level. Negotiating is costly. That is why sometimes, in anticipation of costly disputes down the road, the two sides voluntarily agree to use an arbitration mechanism. MLB players agree to baseball arbitration. That is the two sides revealing that they believe the costs of disputes outweigh the losses from arbitration. 

Again, each side conveys their beliefs and values by agreeing to the arbitration mechanism. Each step in the negotiation process allows the parties to convey the relevant information. No outsider needs to know “the right” answer. For a choice to convey information about relative values, it needs to be freely chosen.

At an abstract level, any trade has two parts. First, people agree to the mechanism, which determines who makes what kinds of offers. At the grocery store, the mechanism is “seller picks the price and buyer picks the quantity.” For buying and selling a house, the mechanism is “seller posts price, buyer can offer above or below and request other conditions.” After both parties agree to the terms, the mechanism plays out and both sides make or accept offers within the mechanism. 

We need choice on both aspects for the price to capture each side’s private information. 

For example, suppose someone comes up to you with a gun and says “give me your wallet or your watch. Your choice.” When you “choose” your watch, we don’t actually call that a choice, since you didn’t pick the mechanism. We have no way of knowing whether the watch means more to you or to the guy with the gun. 

When the JCPA forces Facebook to negotiate with a local news website and Facebook offers to pay a penny per visit, it conveys no information about the relative value that the news website is generating for Facebook. Facebook may just be worried that the website will ask for two pennies and the arbitrator will pick the higher price. It is equally plausible that in a world without transaction costs, the news would pay Facebook, since Facebook sends traffic to them. Is there any chance the arbitrator will pick Facebook’s offer if it asks to be paid? Of course not, so Facebook will never make that offer. 

For sure, things are imposed on us all the time. That is the nature of regulation. Energy prices are regulated. I’m not against regulation. But we should defend that use of force on its own terms and be honest that the system is one of price regulation. We gain nothing by a verbal sleight of hand that turns losing your watch into a “choice” and the JCPA’s FOA into a “negotiation” between platforms and news.

In economics, we often ask about market failures. In this case, is there a sufficient market failure in the market for links to justify regulation? Is that failure resolved by this imposition?

European Union officials insist that the executive order President Joe Biden signed Oct. 7 to implement a new U.S.-EU data-privacy framework must address European concerns about U.S. agencies’ surveillance practices. Awaited since March, when U.S. and EU officials reached an agreement in principle on a new framework, the order is intended to replace an earlier data-privacy framework that was invalidated in 2020 by the Court of Justice of the European Union (CJEU) in its Schrems II judgment.

This post is the first in what will be a series of entries examining whether the new framework satisfies the requirements of EU law or whether, as some critics argue, it falls short. The critics include Max Schrems’ organization NOYB (for “none of your business”), which has announced that it “will likely bring another challenge before the CJEU” if the European Commission officially decides that the new U.S. framework is “adequate.” In this introduction, I will highlight the areas of contention based on NOYB’s “first reaction.”

The overarching legal question that the European Commission (and likely also the CJEU) will need to answer, as spelled out in the Schrems II judgment, is whether the United States “ensures an adequate level of protection for personal data essentially equivalent to that guaranteed in the European Union by the GDPR, read in the light of Articles 7 and 8 of the [EU Charter of Fundamental Rights].” Importantly, as Theodore Christakis, Kenneth Propp, and Peter Swire point out, “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, either substantively or procedurally. The precise degree of flexibility remains an open question, however, and one that the EU Court may need to clarify to a much greater extent.

Proportionality and Bulk Data Collection

Under Article 52(1) of the EU Charter of Fundamental Rights, restrictions of the right to privacy must meet several conditions. They must be “provided for by law” and “respect the essence” of the right. Moreover, “subject to the principle of proportionality, limitations may be made only if they are necessary” and meet one of the objectives recognized by EU law or “the need to protect the rights and freedoms of others.”

As NOYB has acknowledged, the new executive order supplemented the phrasing “as tailored as possible” present in 2014’s Presidential Policy Directive on Signals Intelligence Activities (PPD-28) with language explicitly drawn from EU law: mentions of the “necessity” and “proportionality” of signals-intelligence activities related to “validated intelligence priorities.” But NOYB counters:

However, despite changing these words, there is no indication that US mass surveillance will change in practice. So-called “bulk surveillance” will continue under the new Executive Order (see Section 2 (c)(ii)) and any data sent to US providers will still end up in programs like PRISM or Upstream, despite of the CJEU declaring US surveillance laws and practices as not “proportionate” (under the European understanding of the word) twice.

It is true that the Schrems II Court held that U.S. law and practices do not “[correlate] to the minimum safeguards resulting, under EU law, from the principle of proportionality.” But it is crucial to note the specific reasons the Court gave for that conclusion. Contrary to what NOYB suggests, the Court did not simply state that bulk collection of data is inherently disproportionate. Instead, the reasons it gave were that “PPD-28 does not grant data subjects actionable rights before the courts against the US authorities” and that, under Executive Order 12333, “access to data in transit to the United States [is possible] without that access being subject to any judicial review.”

CJEU case law does not support the idea that bulk collection of data is inherently disproportionate under EU law; bulk collection may be proportionate, taking into account the procedural safeguards and the magnitude of interests protected in a given case. (For another discussion of safeguards, see the CJEU’s decision in La Quadrature du Net.) Further complicating the legal analysis here is that, as mentioned, it is far from obvious that EU law requires that foreign countries offer the same procedural or substantive safeguards that are applicable within the EU.

Effective Redress

The Court’s Schrems II conclusion therefore primarily concerns the effective redress available to EU citizens against potential restrictions of their right to privacy from U.S. intelligence activities. The new two-step system proposed by the Biden executive order includes creation of a Data Protection Review Court (DPRC), which would be an independent review body with power to make binding decisions on U.S. intelligence agencies. In a comment pre-dating the executive order, Max Schrems argued that:

It is hard to see how this new body would fulfill the formal requirements of a court or tribunal under Article 47 CFR, especially when compared to ongoing cases and standards applied within the EU (for example in Poland and Hungary).

This comment raises two distinct issues. First, Schrems seems to suggest that an adequacy decision can only be granted if the available redress mechanism satisfies the requirements of Article 47 of the Charter. But this is a hasty conclusion. The CJEU’s phrasing in Schrems II is more cautious:

…Article 47 of the Charter, which also contributes to the required level of protection in the European Union, compliance with which must be determined by the Commission before it adopts an adequacy decision pursuant to Article 45(1) of the GDPR

In arguing that Article 47 “also contributes to the required level of protection,” the Court is not saying that it determines the required level of protection. This is potentially significant, given that the standard of adequacy is “essential equivalence,” not that it be procedurally and substantively identical. Moreover, the Court did not say that the Commission must determine compliance with Article 47 itself, but with the “required level of protection” (which, again, must be “essentially equivalent”).

Second, there is the related but distinct question of whether the redress mechanism is effective under the applicable standard of “required level of protection.” Christakis, Propp, and Swire offered a helpful analysis suggesting that it is, considering the proposed DPRC’s independence, effective investigative powers, and authority to issue binding determinations. I will offer a more detailed analysis of this point in future posts.

Finally, NOYB raised a concern that “judgment by ‘Court’ [is] already spelled out in Executive Order.” This concern seems to be based on the view that a decision of the DPRC (“the judgment”) and what the DPRC communicates to the complainant are the same thing. In other words, that the legal effects of a DPRC decision are exhausted by providing the individual with the neither-confirm-nor-deny statement set out in Section 3 of the executive order. This is clearly incorrect: the DPRC has the power to issue binding directions to intelligence agencies. The actual binding determinations of the DPRC are not predetermined by the executive order; only the information to be provided to the complainant is.

What may call for closer consideration are issues of access to information and data. For example, in La Quadrature du Net, the CJEU looked at the difficult problem of notification of persons whose data has been subject to state surveillance, requiring individual notification “only to the extent that and as soon as it is no longer liable to jeopardise” the law-enforcement tasks in question. Given the “essential equivalence” standard applicable to third-country adequacy assessments, however, it does not automatically follow that individual notification is required in that context.

Moreover, it also does not necessarily follow that adequacy requires that EU citizens have a right to access the data processed by foreign government agencies. The fact that there are significant restrictions on rights to information and to access in some EU member states, though not definitive (after all, those countries may be violating EU law), may be instructive for the purposes of assessing the adequacy of data protection in a third country, where EU law requires only “essential equivalence.”

Conclusion

There are difficult questions of EU law that the European Commission will need to address in the process of deciding whether to issue a new adequacy decision for the United States. It is also clear that an affirmative decision from the Commission will be challenged before the CJEU, although the arguments for such a challenge are not yet well-developed. In future posts I will provide more detailed analysis of the pivotal legal questions. My focus will be to engage with the forthcoming legal analyses from Schrems and NOYB and from other careful observers.

Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.

Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.

The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission and some good solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse, but more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.

More Good News from the Commission

On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action as anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients, who might face tighter budget constraints than the median physician.

Peering over the rims of the rose-colored glasses, my gaze turns to Meta. In July, the FTC sued to block Meta’s proposed acquisition of Within Unlimited (and its virtual-reality exercise app, Supernatural). Gus wrote about it with wonder, noting reports that the staff had recommended against filing, only to be overruled by the chair.

Now comes October and an amended complaint. The amended complaint is even weaker than the opening salvo. Now, the FTC alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”

So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?

Potential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. More speculation: the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.

When It Rains It Pours Regulations

On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, they announced an Oct. 20 open meeting with three more ANPRs on the agenda.

First, on the advance notice: I’m not sure what they mean by “commercial surveillance.” The term doesn’t appear in statutory law, or in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort. They don’t say or even sketch a possible rule. That’s a problem for the FTC, because the law demands that the Commission state its regulatory objectives, along with regulatory alternatives under consideration, in the ANPR itself. If they get to an NPRM, they are required to describe a proposed rule with specificity.

What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:

When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce….I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate….It’s a naked power grab.

Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: the questions do not violate the Constitution. But she’s probably half wrong too.

For more, see ICLE’s Oct. 20 panel discussion and the executive summary to our forthcoming comments to the Commission.

But wait, there’s more! There were three additional ANPRs on the Commission’s Oct. 20 agenda. So that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Are more regulations an odd way to ask Congress for money? Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.

Laws, Like People, Have Their Limits

Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:

The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.

Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress by the agencies actually charged to enforce those laws. And to equal protection. And three more, at least, for a little regulatory humility, if we find it.

The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited, the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”

This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.

Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:

…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.

The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.

Furthermore, in a recent editorial lambasting the FTC’s challenge to a Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentive for venture-capital investments in promising startups, a result at odds with free markets and innovation:

Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.

This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.

  1. It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
  2. By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias by current FTC leadership would undoubtedly prompt them to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
  3. Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).

In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.

The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.

A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)

The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm that sold doesn’t start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.

And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.

Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.

This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.

The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:

What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?

This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.

Do Monopoly Profits Always Exceed Joint Duopoly Profits?

Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since profits are highest when there is only a single seller, the incumbent will always have an incentive to buy out any competitors.

The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:

I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

We see a presumption against mergers in the recent FTC challenge of Meta’s purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:

The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.

Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?

I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.

For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant its duopoly profit (half of total industry duopoly profits) to prevent entry. It is not, however, necessarily profitable for the incumbent to pay every potential entrant a duopoly profit to avoid entry by any of them.

Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.

For a simple example, consider a Cournot oligopoly model with an industry inverse-demand curve of P(Q) = 1 - Q and constant marginal costs normalized to zero. In a market with N symmetric sellers, each seller earns 1/(N+1)^2 in profits. A monopolist thus makes a profit of 1/4, while each duopolist earns 1/9. If there are three potential entrants, the monopolist must pay each of them the duopoly profit of 1/9, for a total of 3 × 1/9 = 1/3, which exceeds the monopoly profit of 1/4.

In the Nash/Cournot equilibrium, the incumbent will not acquire any of the competitors, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.
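The arithmetic behind this threshold can be checked directly. Below is a minimal sketch in Python (exact fractions, standard library only); following the text’s simple comparison, a buyout strategy is treated as worthwhile only when the total payments fall short of the monopoly profit:

```python
from fractions import Fraction

def cournot_profit(n_firms):
    """Per-firm Cournot profit with inverse demand P(Q) = 1 - Q and
    zero marginal cost: each of n symmetric firms earns 1/(n+1)^2."""
    return Fraction(1, (n_firms + 1) ** 2)

monopoly_profit = cournot_profit(1)  # 1/4
duopoly_profit = cournot_profit(2)   # 1/9

# To remain a monopolist, the incumbent must pay each potential entrant
# what that entrant could earn by entering alone: the duopoly profit.
# (As in the text, we simply compare total payments to monopoly profit.)
for n_entrants in (1, 2, 3):
    total_buyout_cost = n_entrants * duopoly_profit
    worthwhile = total_buyout_cost < monopoly_profit
    print(f"{n_entrants} entrant(s): pay {total_buyout_cost} "
          f"to keep {monopoly_profit} -> "
          f"{'profitable' if worthwhile else 'not profitable'}")
```

With one or two entrants, the payments (1/9 and 2/9) fall short of 1/4, so buying out entry pays; with three, the 1/3 total exceeds the 1/4 monopoly profit and the strategy fails.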

If we observe an acquisition in a market with many potential entrants (which any given market may or may not have), the merger cannot be solely about preserving monopoly profits, since the model above shows that the incumbent has no incentive to buy them all out.

If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for that deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead to empirical analysis of the merger and market in question, as to whether it would be profitable to acquire all potential entrants.

The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:

Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.

If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.

Many Reasons for Mergers

But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.

If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model gives us no way to determine where mergers would stop absent antitrust enforcement.

Mergers do not affect the production side of the economy, under this assumption, but exist solely to gain the market power to manipulate prices. Since the model finds no downsides for the incumbent to acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law. 

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.
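The winner-take-all point can be illustrated in the same linear-demand setup. The cost figures below are illustrative assumptions, not numbers from the text:

```python
def monopoly_profit(cost):
    """Monopoly profit with inverse demand P(Q) = 1 - Q and constant
    marginal cost c: the monopolist sets q* = (1 - c)/2, so profit
    is (1 - c)^2 / 4."""
    return (1 - cost) ** 2 / 4

incumbent_cost = 0.5  # hypothetical higher-cost incumbent
entrant_cost = 0.1    # hypothetical lower-cost entrant

# What the incumbent earns as monopolist, and so the most it could
# credibly offer to preserve its position: (1 - 0.5)^2 / 4 = 0.0625.
incumbent_profit = monopoly_profit(incumbent_cost)

# In a winner-take-all market, a lower-cost entrant that wins the
# market earns monopoly profit at its own cost: (1 - 0.1)^2 / 4 = 0.2025.
entrant_profit = monopoly_profit(entrant_cost)

assert entrant_profit > incumbent_profit
```

Because the profit available under entry exceeds anything the higher-cost incumbent can earn, a payment that merely preserves the incumbent’s monopoly cannot explain the entrant’s decision to sell; something else, such as efficiency gains from combining, must be doing the work.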

An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.