
The 117th Congress closed out without a floor vote on either of the major pieces of antitrust legislation introduced in both chambers: the American Innovation and Choice Online Act (AICOA) and the Open App Markets Act (OAMA). But it was evident at yesterday’s hearing of the Senate Judiciary Committee’s antitrust subcommittee that at least some advocates—both in academia and among the committee leadership—hope to raise those bills from the dead.

Of the committee’s five carefully chosen witnesses, only New York University School of Law’s Daniel Francis appeared to appreciate the competitive risks posed by AICOA and OAMA—noting, among other things, that the bills’ failure to distinguish between harm to competition and harm to certain competitors was a critical defect.

Yale School of Management’s Fiona Scott Morton acknowledged that ideal antitrust reforms were not on the table, and appeared open to amendments. But she also suggested that current antitrust standards were deficient and, without much explanation or attention to the bills’ particulars, that AICOA and OAMA were both steps in the right direction.

Subcommittee Chair Amy Klobuchar (D-Minn.), who sponsored AICOA in the last Congress, seems keen to reintroduce it without modification. In her introductory remarks, she lamented the power, wealth (if that’s different), and influence of Big Tech in helping to sink her bill last year.

Apparently, firms targeted by anticompetitive legislation would rather they weren’t. Folks outside the Beltway should sit down for this: it seems those firms hire people to help them explain, to Congress and the public, both the fact that they don’t like the bills and why. The people they hire are called “lobbyists.” It appears that, sometimes, that strategy works or is at least an input into a process that sometimes ends, more or less, as they prefer. Dirty pool, indeed. 

There are, of course, other reasons why AICOA and OAMA might have stalled. Had they been enacted, it’s very likely that they would have chilled innovation, harmed consumers, and provided a level of regulatory discretion that would have been very hard, if not impossible, to dial back. If reintroduced and enacted, the bills would be more likely to “rein in” competition and innovation in the American digital sector and, specifically, targeted tech firms’ ability to deliver innovative products and services to tens of millions of (hitherto very satisfied) consumers.

Our colleagues at the International Center for Law & Economics (ICLE) and its affiliated scholars, among others, have explained why. For a selected bit of self-plagiarism, AICOA and OAMA received considerable attention in our symposium on Antitrust’s Uncertain Future; ICLE’s Dirk Auer had a Truth on the Market post on AICOA; and Lazar Radic wrote a piece on OAMA that’s currently up for a Concurrences award.

To revisit just a few critical points:

  1. AICOA and OAMA both suppose that “self-preferencing” is generally harmful. Not so. A firm might invest in developing a successful platform and ecosystem because it expects to recoup some of that investment through, among other means, preferred treatment for some of its own products. Exercising a measure of control over downstream or adjacent products might drive the platform’s development in the first place (see here and here for some potential advantages). To cite just a few examples from the empirical literature: Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand, not just for Instagram, but for the entire category of photography apps; Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally; and Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base, thereby increasing the potential audience for independent game developers, even in the face of competition from first-party games.
  2. AICOA and OAMA, in somewhat different ways, favor open systems, interoperability, and/or data portability. All of these have potential advantages but, equally, potential costs or disadvantages; whether any is procompetitive or anticompetitive depends on particular facts and circumstances. In the abstract, each represents a business model that might well be procompetitive or benign, and that consumers might well favor or disfavor. As Sam Bowman has observed, interoperability’s costs sometimes exceed its benefits: it can be exceedingly costly to implement or maintain, and it can generate vulnerabilities that challenge or undermine data security. Data portability can be handy, but it can also harm the interests of third parties—say, friends willing to be named, or depicted in certain photos on a certain platform, but not just anywhere. And while recent commentary suggests that the absence of “open” systems signals a competition problem, it’s hard to understand why. There are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.
  3. AICOA and OAMA both embody dubious assumptions. For example, underlying AICOA is a supposition that vertical integration is generally (or at least typically) harmful. Critics of established antitrust law can point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. And it is, in fact, possible for vertical mergers or other vertical conduct to harm competition. But that possibility, and the findings of these few studies, are routinely overstated. The weight of the empirical evidence shows that vertical integration tends to be competitively benign. For example, a widely acclaimed meta-analysis by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade led them to conclude:

“[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. . . .  We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.”

  4. Network effects and data advantages are not insurmountable, nor even necessarily harmful. Advantages of scale and scope in data sets vary according to the data at issue, the context in which it is applied, and the analytic sophistication of those with access to it; and they are subject to diminishing returns in any case (see the stylized sketch below). Simple measures of market share or other numerical thresholds may signal very little of competitive import. See, e.g., this on the contestable platform paradox; Carl Shapiro on the putative decline of competition and irrelevance of certain metrics; and, more generally, antitrust’s well-grounded and wholesale repudiation of the Structure-Conduct-Performance paradigm.
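One stylized way to capture the diminishing-returns point (our illustration, not drawn from the cited sources) is to treat the predictive value of a data set as a concave function of its size:

```latex
% Illustrative only: V(n) is the predictive value of a data set of n observations.
% Any concave specification makes the point; the log form is just a familiar example.
V(n) = \alpha \log(1 + n), \qquad
V'(n) = \frac{\alpha}{1+n} > 0, \qquad
V''(n) = -\frac{\alpha}{(1+n)^{2}} < 0.
```

Under any such specification, the billionth observation adds far less value than the millionth, which is one reason raw scale or share metrics say little about durable competitive advantage.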

These points are not new. As we note above, they’ve been made more carefully, and in more detail, before. What’s new is that the failure of AICOA and OAMA to reach floor votes in the last Congress leaves their sponsors, and many of their advocates, unchastened.

Conclusion

At yesterday’s hearing, Sen. Klobuchar noted that nations around the world are adopting regulatory frameworks aimed at “reining in” American digital platforms. True enough, but that’s exactly what AICOA and OAMA promise; they will not foster competition or competitiveness.

Novel industries may pose novel challenges, not least to antitrust. But it does not follow that the EU’s Digital Markets Act (DMA), proposed policies in Australia and the United Kingdom, or AICOA and OAMA represent beneficial, much less optimal, policy reforms. As Francis noted, the central commitments of OAMA and AICOA, like the DMA and other proposals, aim to help certain firms at the expense of other firms and consumers. This is not procompetitive reform; it is rent-seeking by less-successful competitors.

AICOA and OAMA were laid to rest with the 117th Congress. They should be left to rest in peace.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” Plaintiffs’ (respondents before the Court) theory is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.   

Taamneh presents a complex question of intermediary liability generally, one that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
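One stylized way to write that condition (our formalization, offered only as an illustration, not a test drawn from the case law): let $D_i$ denote the harm that intermediary $i$ can feasibly deter, $A_i$ its direct costs of monitoring and control, and $K_i$ the collateral costs of its compliance, such as over-removal of legal speech. Then:

```latex
% Stylized least-cost-avoider condition for intermediary liability (illustrative).
\text{impose liability on } i \iff
D_i > A_i + K_i
\quad \text{and} \quad
A_i + K_i = \min_{j}\,\bigl(A_j + K_j\bigr).
```

That is, liability belongs on the least-cost avoider, and even then only when the deterrence gained exceeds that party’s total avoidance costs.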

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek

Fleites v. MindGeek illustrates well that proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under theories grounded in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Victims of Trafficking and Violence Protection Act (TVPA) for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is MindGeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed, one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, or which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of Pornhub. For example, purveyors of illegal content on Pornhub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.

But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down Pornhub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  
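Some purely hypothetical numbers (ours, for illustration only) show the scale problem. Suppose a platform hosts a billion posts per day, 0.1% of which are violating, and its moderation catches 99.9% of violations while wrongly flagging 0.1% of legal posts:

```latex
% All figures hypothetical, chosen only to illustrate moderation at scale.
\underbrace{10^{9} \times 0.001}_{\text{violating posts}} = 10^{6}, \qquad
\underbrace{10^{6} \times 0.001}_{\text{violations missed}} = 10^{3}, \qquad
\underbrace{(10^{9} - 10^{6}) \times 0.001}_{\text{legal posts wrongly removed}} \approx 10^{6}.
```

Even at that near-perfect accuracy, a thousand violating posts survive every day while roughly a million legal ones are wrongly removed; driving the misses toward zero only makes the second number worse.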

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiffs’ theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false negatives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard constitute content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained engines to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM, as ChatGPT itself acknowledged in response to my query.
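To make “guessing the next word” concrete, here is a minimal toy sketch of the generative principle. A real LLM uses a neural network trained on vast corpora rather than the bigram counts below, but the structure (sample the next token given what came before) is the same, and nothing in this sketch is drawn from any actual product:

```python
import random
from collections import defaultdict

# Tiny training corpus standing in for "troves of data from the internet."
corpus = (
    "section 230 protects online platforms from liability for "
    "information provided by another information content provider"
).split()

# Record which words have been observed to follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling an observed next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no continuation observed for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("information"))
# The output is assembled by the model, not quoted verbatim from a third
# party -- which is the nub of the Section 230 argument above.
```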

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought for false information provided by an LLM app. But it is notable that these LLM apps are highly unlikely to know much about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant reputational damages from falsehoods spread online are those who are in the public eye. This means that, for the purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was publicized to a third party that resulted in damages to him. For the sake of argument, let’s say he can do both. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, though the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, when the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results.

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics.

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still being in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

In our previous post on Gonzalez v. Google LLC, which will come before the U.S. Supreme Court for oral arguments Feb. 21, Kristian Stout and I argued that, while the U.S. Justice Department (DOJ) got the general analysis right (looking to Roommates.com as the framework for exceptions to the general protections of Section 230), it got the application wrong (saying that algorithmic recommendations should be excepted from immunity).

Now, after reading Google’s brief, as well as the briefs of amici on its side, it is even more clear to me that:

  1. algorithmic recommendations are protected by Section 230 immunity; and
  2. creating an exception for such algorithms would severely damage the internet as we know it.

I address these points in reverse order below.

Google on the Death of the Internet Without Algorithms

The central point that Google makes throughout its brief is that a finding that Section 230’s immunity does not extend to the use of algorithmic recommendations would have potentially catastrophic implications for the internet economy. Google and amici for respondents emphasize the ubiquity of recommendation algorithms:

Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack. The result of these algorithms is unprecedented access to knowledge, from the lifesaving (“how to perform CPR”) to the mundane (“best pizza near me”). Google Search uses algorithms to recommend top search results. YouTube uses algorithms to share everything from cat videos to Heimlich-maneuver tutorials, algebra problem-solving guides, and opera performances. Services from Yelp to Etsy use algorithms to organize millions of user reviews and ratings, fueling global commerce. And individual users “like” and “share” content millions of times every day. – Brief for Respondent Google, LLC at 2.

The “recommendations” they challenge are implicit, based simply on the manner in which YouTube organizes and displays the multitude of third-party content on its site to help users identify content that is of likely interest to them. But it is impossible to operate an online service without “recommending” content in that sense, just as it is impossible to edit an anthology without “recommending” the story that comes first in the volume. Indeed, since the dawn of the internet, virtually every online service—from news, e-commerce, travel, weather, finance, politics, entertainment, cooking, and sports sites, to government, reference, and educational sites, along with search engines—has had to highlight certain content among the thousands or millions of articles, photographs, videos, reviews, or comments it hosts to help users identify what may be most relevant. Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable. As a result, exposing online services to liability for the “recommendations” inherent in those organizational choices would expose them to liability for third-party content virtually all the time. – Amicus Brief for Meta Platforms at 3-4.

In other words, if Section 230 were limited in the way that the plaintiffs (and the DOJ) seek, internet platforms’ ability to offer users useful information would be strongly attenuated, if not completely impaired. The resulting legal exposure would lead inexorably to far less of the kinds of algorithmic recommendations upon which the modern internet is built.

This is, in part, why we weren’t able to fully endorse the DOJ’s brief in our previous post. The DOJ’s brief simply goes too far. It would be unreasonable to establish as a categorical rule that use of the ubiquitous auto-discovery algorithms that power so much of the internet would strip a platform of Section 230 protection. The general rule advanced by the DOJ’s brief would have detrimental and far-ranging implications.

Amici on Publishing and Section 230(f)(4)

Google and the amici also make a strong case that algorithmic recommendations are inseparable from publishing. They have a strong textual hook in Section 230(f)(4), which explicitly protects “enabling tools that… filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

As the amicus brief from a group of internet-law scholars—including my International Center for Law & Economics colleagues Geoffrey Manne and Gus Hurwitz—put it:

Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text. – Amicus Brief of Internet Law Scholars at 3-4.

In other words, Section 230 protects not just the conveyance of information, but how that information is displayed. Algorithmic recommendations are a subset of those display tools that allow users to find what they are looking for with ease. Section 230 can’t be reasonably read to exclude them.
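A short sketch, using invented fields and weights, shows the structural point: a recommendation algorithm of this kind only filters and reorders third-party items (the “pick, choose, … organize” functions of Section 230(f)(4)) and contributes no information of its own:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of third-party content; the platform authors none of it."""
    item_id: str
    relevance: float   # fit with the user's inferred interests (hypothetical signal)
    engagement: float  # historical engagement (hypothetical signal)

def recommend(items: list[Item], top_k: int = 3) -> list[Item]:
    """Rank third-party items for display.

    The function picks, chooses, and organizes its inputs but creates
    no content: every item returned was "provided by another
    information content provider." The weights are illustrative only.
    """
    def score(it: Item) -> float:
        return 0.7 * it.relevance + 0.3 * it.engagement
    return sorted(items, key=score, reverse=True)[:top_k]
```

Whether the ordering is presented as search results, a feed, or an “Up Next” queue, everything the user sees originates with a third party; the algorithm supplies only the arrangement.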

Why This Isn’t Really (Just) a Roommates.com Case

This is where the DOJ’s amicus brief (and our previous analysis) misses the point. This is not strictly a Roommates.com case. The case actually turns on whether algorithmic recommendations are separable from publication of third-party content, rather than whether they are design choices akin to what was occurring in that case.

For instance, in our previous post, we argued that:

[T]he DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did.

While we thought the DOJ went too far in differentiating algorithmic recommendations from other uses of algorithms, we gave it too much credit in applying the Roommates.com analysis. Section 230 was meant to immunize filtering tools, so long as the information provided is from third parties. Algorithmic recommendations—like the type at issue with YouTube’s “Up Next” feature—are less like the conduct in Roommates.com and much more like a search engine.

The DOJ did, however, have a point regarding algorithmic tools in that they may—like any other tool a platform might use—be employed in a way that transforms the automated promotion into a direct endorsement or original publication. For instance, it’s possible to use algorithms to intentionally amplify certain kinds of content in such a way as to cultivate more of that content.

That’s, after all, what was at the heart of Roommates.com. The site was designed to elicit responses from users that violated the law. Algorithms can do that, but as we observed previously, and as the many amici in Gonzalez observe, there is nothing inherent to the operation of algorithms that match users with content that makes their use categorically incompatible with Section 230’s protections.

Conclusion

After looking at the textual and policy arguments forwarded by both sides in Gonzalez, it appears that Google and amici for respondents have the better of it. As several amici argued, to the extent there are good reasons to reform Section 230, Congress should take the lead. The Supreme Court shouldn’t take this case as an opportunity to significantly change the consensus of the appellate courts on the broad protections of Section 230 immunity.

Next month, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google LLC, a case that has drawn significant attention and many bad takes regarding how Section 230 of the Communications Decency Act should be interpreted. Enacted in the mid-1990s, when the Internet as we know it was still in its infancy, Section 230 has grown into a law that offers online platforms a fairly comprehensive shield against liability for the content that third parties post to their services. But the law has also come increasingly under fire, from both the political left and the right.

At issue in Gonzalez is whether Section 230(c)(1) immunizes Google from a set of claims brought under the Antiterrorism Act of 1990 (ATA). The petitioners are relatives of Nohemi Gonzalez, an American citizen murdered in a 2015 terrorist attack in Paris. They allege that Google, through YouTube, is liable under the ATA for providing assistance to ISIS, based on four main allegations:

  1. Google allowed ISIS to use YouTube to disseminate videos and messages, thereby recruiting and radicalizing terrorists responsible for the murder.
  2. Google failed to take adequate steps to take down videos and accounts and keep them down.
  3. Google recommends videos of others, both through subscriptions and algorithms.
  4. Google monetizes this content through its AdSense service, with ISIS-affiliated users receiving revenue. 

The 9th U.S. Circuit Court of Appeals dismissed all of the non-revenue-sharing claims as barred by Section 230(c)(1), but allowed the revenue-sharing claim to go forward. 

Highlights of DOJ’s Brief

In an amicus brief, the U.S. Justice Department (DOJ) ultimately asks the Court to vacate the 9th Circuit’s judgment regarding those claims that are based on YouTube’s alleged targeted recommendations of ISIS content. But the DOJ also rejects much of the petitioners’ brief, arguing that Section 230 does rightfully apply to the rest of the claims.

The crux of the DOJ’s brief concerns when and how design choices can be outside of Section 230 immunity. The lodestar 9th Circuit case that the DOJ brief applies is 2008’s Fair Housing Council of San Fernando Valley v. Roommates.com.

As the DOJ notes, radical theories advanced by the plaintiffs and other amici would go too far in restricting Section 230 immunity based on a platform’s decisions on whether or not to block or remove user content (see, e.g., its discussion on pp. 17-21 of the merits and demerits of Justice Clarence Thomas’s Malwarebytes concurrence).  

At the same time, the DOJ’s brief notes that there is room for a reasonable interpretation of Section 230 that allows for liability to attach when online platforms behave unreasonably in their promotion of users’ content. Applying essentially the 9th Circuit’s Roommates.com standard, the DOJ argues that YouTube’s choice to amplify certain terrorist content through its recommendations algorithm is a design choice, rather than simply the hosting of third-party content, thereby removing it from the scope of  Section 230 immunity.  

While there is much to be said in favor of this approach, it’s important to point out that, although directionally correct, it’s not at all clear that a Roommates.com analysis should ultimately come down as the DOJ recommends in Gonzalez. More broadly, the way the DOJ structures its analysis has important implications for how we should think about the scope of Section 230 reform that attempts to balance accountability for intermediaries with avoiding undue collateral censorship.

Charting a Middle Course on Immunity

The important point on which the DOJ relies from Roommates.com is that intermediaries can be held accountable when their own conduct creates violations of the law, even if it involves third-party content. As the DOJ brief puts it:

Section 230(c)(1) protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content. The Ninth Circuit’s Roommates.com decision illustrates the point in the context of a website offering a roommate-matching service… As a condition of using the service, Roommates.com “require[d] each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household,” and to “describe his preferences in roommates with respect to the same three criteria.” Ibid. The plaintiffs alleged that asking those questions violated housing-discrimination laws, and the court of appeals agreed that Section 230(c)(1) did not shield Roommates.com from liability for its “own acts” of “posting the questionnaire and requiring answers to it.” Id. at 1165.

Imposing liability in such circumstances does not treat online platforms as the publishers or speakers of content provided by others. Nor does it obligate them to monitor their platforms to detect objectionable postings, or compel them to choose between “suppressing controversial speech or sustaining prohibitive liability.”… Illustrating that distinction, the Roommates.com court held that although Section 230(c)(1) did not apply to the website’s discriminatory questions, it did shield the website from liability for any discriminatory third-party content that users unilaterally chose to post on the site’s “generic” “Additional Comments” section…

The DOJ proceeds from this basis to analyze what it would take for Google (via YouTube) to no longer benefit from Section 230 immunity by virtue of its own editorial actions, as opposed to its actions as a publisher (which 230 would still protect). For instance, are the algorithmic suggestions of videos simply neutral tools that allow for users to get more of the content they desire, akin to search results? Or are the algorithmic suggestions of new videos a design choice that makes it akin to Roommates?

The DOJ argues that taking steps to better display pre-existing content is not content development or creation, in and of itself. Similarly, it would be a mistake to make intermediaries liable for creating tools that can then be deployed by users:

Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information. A chatroom might supply topic headings to organize posts; a photo-sharing site might offer a feature for users to signal that they like or dislike a post; a classifieds website might enable users to add photos or maps to their listings. If such features rendered the website a co-developer of all users’ content, Section 230(c)(1) would be a dead letter.

At a high level, this is correct. Unfortunately, the DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did. Toward this end, the brief notes that:

The distinction between a recommendation and the recommended content is particularly clear when the recommendation is explicit. If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, “You should watch this,” that message would fall outside Section 230(c)(1). Encouraging a user to watch a selected video is conduct distinct from the video’s publication (i.e., hosting). And while YouTube would be the “publisher” of the recommendation message itself, that message would not be “information provided by another information content provider.” 47 U.S.C. 230(c)(1).

An Absence of Immunity Does Not Mean a Presence of Liability

Importantly, the DOJ brief emphasizes throughout that remanding the ATA claims is not the end of the analysis—i.e., it does not mean that the plaintiffs can prove the elements. Moreover, other background law—notably, the First Amendment—can limit the application of liability to intermediaries, as well. As we put it in our paper on Section 230 reform:

It is important to again note that our reasonableness proposal doesn’t change the fact that the underlying elements in any cause of action still need to be proven. It is those underlying laws, whether civil or criminal, that would possibly hold intermediaries liable without Section 230 immunity. Thus, for example, those who complain that FOSTA/SESTA harmed sex workers by foreclosing a safe way for them to transact (illegal) business should really be focused on the underlying laws that make sex work illegal, not the exception to Section 230 immunity that FOSTA/SESTA represents. By the same token, those who assert that Section 230 improperly immunizes “conservative bias” or “misinformation” fail to recognize that, because neither of those is actually illegal (nor could they be under First Amendment law), Section 230 offers no additional immunity from liability for such conduct: There is no underlying liability from which to provide immunity in the first place.

There’s a strong likelihood that, on remand, the court will find there is no violation of the ATA at all. Section 230 immunity need not be stretched beyond all reasonable limits to protect intermediaries from hypothetical harms when underlying laws often don’t apply. 

Conclusion

To date, the contours of Section 230 reform largely have been determined by how courts interpret the statute. There is an emerging consensus that some courts have gone too far in extending Section 230 immunity to intermediaries. The DOJ’s brief is directionally correct, but the Court should not adopt it wholesale. More needs to be done to ensure that the particular facts of Gonzalez are not used to completely gut Section 230 more generally.  

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on the use of data by private firms and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other bases on which it can argue it relies in order to make use of user data, but a larger issue is at play: the decision finds both that using personal data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing, but only invalidates “contractual necessity” as the basis for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising while not depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations. What is technically possible to offer can also be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of the contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to conclude whether those services are economically viable without the use of personalized advertising. 

There is, however, a key institutional point to be made here: privacy regulators are likely to be ill-equipped to conduct this kind of analysis, which arguably should counsel significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

Price-parity clauses have, until recently, been little discussed in the academic vertical-price-restraints literature. Their growing importance, however, cannot be ignored, and common misconceptions around their use and implementation need to be addressed. While similar in nature to both resale price maintenance and most-favored-nations clauses, the special vertical relationship between sellers and the platform inherent in price-parity clauses leads to distinct economic outcomes. Additionally, with a growing number of lawsuits targeting their use in online platform economies, it is critical to fully understand the economic incentives and outcomes stemming from price-parity clauses. 

Vertical price restraints—of which resale price maintenance (RPM) and most-favored-nation (MFN) clauses are among many—are both common in business and widely discussed in the academic literature. While there remains a healthy debate among academics as to the true competitive effects of these contractual arrangements, the state of U.S. jurisprudence is clear. Since the Supreme Court’s Leegin and State Oil decisions, the use of RPM is no longer presumed anticompetitive; its procompetitive and anticompetitive effects must instead be assessed under a “rule of reason” framework to determine legality under antitrust law. The competitive effects of MFN clauses are also generally analyzed under the rule of reason.

Distinct from these two types of clauses, however, are price-parity clauses (PPCs). A PPC is an agreement between a platform and an independent seller under which the seller agrees to offer their goods on the platform for their lowest advertised price. While sometimes termed “platform MFNs,” the economic effects of PPCs on modern online-commerce platforms are distinct.

This commentary seeks to fill a hole in the PPC literature left by its current focus on producers that sell exclusively nonfungible products on various platforms. That literature generally finds that a PPC reduces price competition between platforms. This finding, however, is not universal. Notably absent from the discussion is any concept of multiple sellers of the same good on the same platform. Correctly accounting for this oversight leads to the conclusion that PPCs generally are both efficient and procompetitive.

Introduction

In a pair of lawsuits filed in California and the District of Columbia, Amazon has come under fire for its restrictions around pricing. These suits allege that Amazon’s restrictive PPCs harm consumers, arguing that sellers are penalized when the price for their good on Amazon is higher than on alternative platforms. They go on to claim that these provisions harm sellers, prevent platform competition, and ultimately force consumers to pay higher prices. The true competitive result of these provisions, however, is unclear.

The literature that does exist on the effects these provisions have on competitive outcomes in online marketplaces falls fundamentally short. Jonathan Baker and Fiona Scott Morton (among others) fail to differentiate between PPCs and MFN clauses. This distinction is important because, while the impacts on consumers may be similar, the mechanisms by which those impacts arise are not. An MFN provision stipulates that a supplier—when working with several distributors—must offer its goods to one particular distributor on terms better than or equal to those offered to all other distributors.

PPCs, on the other hand, are agreements between sellers and platforms to ensure that the platform’s buyers have access to goods on terms better than or equal to those offered to the same buyers on other platforms. Sellers that are bound by a PPC and that intend to sell on multiple platforms will have to price uniformly across all platforms to satisfy the PPC. PPCs are contracts between sellers and platforms that define conduct between sellers and buyers; they do not determine conduct between sellers and the platform.

A common characteristic of MFN and PPC arrangements is that consumers are often unaware of the existence of either clause. What is not common, however, are the outcomes that stem from their use. An MFN clause only dictates the terms under which a good is sold to a distributor and does not constrain the interaction between distributors and consumers. While the lower prices realized by a distributor may be passed on as lower prices for the consumer, this is not universally true. A PPC, on the other hand, constrains the interactions between sellers and consumers, necessitating that the seller’s price on any given platform, by definition, be as low as its price on all other platforms. This leads to the lowest prices for a given good in a market.

Intra-Platform Competition

The fundamental oversight in the literature is the lack of any discussion of intra-platform competition in markets for fungible goods, in which multiple sellers sell the same good on multiple platforms. Up to this point, all the discussion surrounding PPCs has centered on the Booking.com case in the European Union.

In that case, Booking.com instituted price-parity clauses with the sellers of hotel rooms on its platform, mandating that they sell rooms on Booking.com for a price equal to or less than their price on all other platforms. This pricing restriction extended to each hotel’s own first-party website as well.

In this case, it was alleged that consumers were worse off because the PPC unambiguously increased prices for hotel rooms: even if a hotel were willing to offer a lower price on its own website, it was unable to do so due to the PPC. This potential lower price would come about due to the low (possibly zero) commission a hotel must pay to sell on its own website. On the hotel’s own website, the room could be discounted by as much as the commission that Booking.com took as a percentage of each sale. Further, if a competing platform chose to charge a lower commission than Booking.com, the discount could be the difference in commission rates.
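
To make that arithmetic concrete, the following is a minimal sketch in Python of the foreclosed direct-channel discount described above. The commission rate, room price, and zero direct-selling cost are purely illustrative assumptions, not Booking.com’s actual figures:

```python
# Hypothetical illustration of the direct-channel discount a PPC forecloses.
def max_direct_discount(room_price: float, commission: float,
                        direct_cost: float = 0.0) -> float:
    """Largest discount a hotel could offer on its own website while
    still netting what it earns on the platform."""
    net_on_platform = room_price * (1 - commission)
    return room_price - (net_on_platform + direct_cost)

# A €200 room under an assumed 15% commission nets the hotel €170, so
# absent a PPC the hotel could discount a direct booking by up to €30.
print(max_direct_discount(200.0, 0.15))  # 30.0
```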

While one other case, E-book MFN, is tangentially relevant, Booking.com is the only case in which independent third-party sellers list a good or service for sale on a platform that imposes a PPC. And while there is some evidence of harm in the market for online hotel-room bookings, such bookings are not analogous to platform-based sales of fungible goods. Sellers of hotel rooms are unable to compete to sell the same room; they can sell similarly situated, easily substitutable rooms, but the rooms are still non-fungible.

In online commerce, however, sellers regularly sell fungible goods. From lip balm and batteries to jeans and air filters, a seller of goods on an e-commerce site is among many similarly situated sellers selling nearly (or perfectly) identical products. These sellers not only have to compete with goods that are close substitutes to the good they are selling, but also with other sellers that offer an identical product.

Therefore, the conclusions found by critics of Booking.com’s PPC do not hold when removing the non-fungibility assumption. While there is some evidence that PPCs may reduce competition among platforms on the margin, there is no evidence that competition among sellers on a given platform is reduced. In fact, the PPC may increase competition by forcing all sellers on a platform to play by the same pricing rules.

We will delve into the competitive environment under a strict PPC—whereby sellers are banned from the platform when found to be in violation of the clause—and introduce the novel (and more realistic) implicit PPC, whereby sellers have an incentive to comply with the PPC but are not punished for deviation. First, however, we must understand the incentives of a seller not bound by a PPC.

Competition by sellers not bound by price-parity clauses

Because platforms may charge different levels of commission per sale, an individual seller in this market may choose to sell identical products at different prices across different platforms. To sell the highest number of units possible, sellers have an incentive to steer customers toward the platforms that charge the lowest commissions and thereby offer the seller the most revenue per sale.

Since the platforms understand sellers’ incentive to steer consumers toward low-commission platforms, they may not allocate resources toward additional perks, such as free shipping. Platforms may instead compete vigorously to reduce costs in order to offer the lowest commissions possible. In the long run, this race to the bottom might leave the market with one dominant, ultra-efficient, naturally monopolistic platform that offers the lowest possible commission.

While this sounds excellent for consumers, since they get the lowest possible prices on all goods, this simple scenario leaves out non-price factors. Free shipping, handling, and physical processing; payment processing; and the time spent waiting for the good to arrive are all additional considerations that consumers weigh. For a higher commission, often levied on the seller side, platforms may offer a number of these perks, increasing consumer welfare by more than the associated price increase.

In this scenario, because of the under-allocation of resources to platform efficiency, a unified logistics market may not emerge: one in which buyers can search for and purchase a good, sellers can sell the good, and the platform facilitates the shipping, processing, and handling. By fragmenting these markets through the inefficient allocation of capital, consumer welfare is not maximized. And while the raw price of a good is minimized, the total price of the transaction is not.

Competition by sellers bound by strict price-parity clauses

In this scenario, each platform will have some version of a PPC. When the strict PPC is enforced, a seller is restricted from selling on that platform when they are found to have broken parity. Sellers choose the platforms on which they want to sell based on which platform may generate the greatest return; they then set a single price for all platforms. The seller might then make higher returns on platforms with lower commissions and lower returns on platforms with higher commissions. Fundamentally, to sell on a platform, the seller must at least cover its marginal cost.

Due to the potential of being banned for breaking parity, sellers may have an incentive to price so low that, on some platforms, they do not turn a profit (due to high commissions) while compensating for those losses with profits earned on other platforms with lower commissions. Alternatively, sellers may choose to forgo sales on a given platform altogether if the marginal cost associated with selling on the platform under parity is too great.

For a seller to continue to sell on a platform, or to decide to sell on an additional platform, the marginal revenue associated with selling on that platform must outweigh the marginal cost. In effect, even if the commission is so high that the seller merely breaks even, it is still in the seller’s best interest to continue on the platform; only if the seller is losing money by selling on the platform is it economically rational to exit.
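
A minimal sketch of this participation rule, with purely illustrative numbers, may help fix ideas. Selling on a platform remains rational so long as the price, net of commission, at least covers marginal cost:

```python
# Illustrative sketch of the seller's platform-participation decision.
def stays_on_platform(price: float, commission: float, marginal_cost: float) -> bool:
    """A seller remains on a platform if marginal revenue (price net of
    the platform's commission) at least covers its marginal cost."""
    marginal_revenue = price * (1 - commission)
    return marginal_revenue >= marginal_cost

# With an assumed parity-bound price of $10 and marginal cost of $8, a
# 20% commission leaves the seller at break-even, so it stays; a 25%
# commission pushes it below cost, so exiting is rational.
print(stays_on_platform(10.0, 0.20, 8.0))  # True
print(stays_on_platform(10.0, 0.25, 8.0))  # False
```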

Within the boundaries of the platform, sellers bound by a PPC have a strong incentive to compete vigorously on price. They also have an incentive to compete across platforms, to generate the highest possible revenue and offset any losses from high-commission platforms.

Platforms have an incentive to vigorously compete to attract buyers and sellers by offering various incentives and additional services to increase the quality of a sale. Examples of such “add-ons” include fulfilment and processing undertaken by the platform, expedited shipping and insured shipping, and authentication services and warranties.

Platforms also have an incentive to find the correct level of commission based on the add-on services that they provide. A platform that wants to offer the lowest possible prices might provide no or few add-ons and charge a low commission. Alternatively, the platform that wants to provide the highest possible quality may charge a high commission in exchange for many add-ons.

As the value that platforms can offer buyers and sellers increases, and as sellers lower their prices to maintain or increase sales, the quality bestowed upon consumers is likely to rise. Competition within the platform, however, may decline. Highly efficient sellers (those with the lowest marginal cost) may use strict PPCs—under which sellers are removed from the platform for breaking parity—to price less-efficient sellers out of the market. Additionally, efficient platforms may be able to price less-efficient platforms out of the market by offering better add-ons, starving the platforms of buyers and sellers in the long run.

Even if prices are marginally higher and competition in the marketplace marginally lower than in a world without price parity, the marginal benefit for the consumer is likely greater. This is because the add-on services that platforms use to entice buyers and sellers to transact on a given platform, over time, cost less to provide than the benefit they bestow. Regardless of whether every single consumer realizes the full value of such added benefits, the likely result is a level of consumer welfare that is greater under price parity than in its absence.

Implicit price parity: The case of Amazon

Amazon’s price-parity policy conditions access to some seller perks on adherence to parity, guiding sellers toward a unified pricing scheme. The term best suited for this type of policy is an “implicit price-parity clause” (IPPC). Under this system, the incentive structure rewards sellers for pricing competitively on Amazon without punishing alternative pricing measures. For example, if a seller sets prices higher on Amazon because Amazon charges higher commissions than other platforms, that seller will not be eligible for Amazon’s Buy Box. But they are still able to sell, market, and promote their own product on the platform. They still show up in the “other sellers” dropdown section of the product page, and consumers can choose that seller with little more than a scroll and an additional click.

While the remainder of this analysis focuses on the specific policies found on Amazon’s platform, IPPCs are found on other platforms, as well. Walmart’s marketplace contains a similar parity policy, along with a similarly functioning “buy” box. eBay, too, offers a “best price guarantee,” through which the site offers to match a qualified competitor’s price, plus 10%, within 48 hours. While this policy is not identical in nature, it is in result: identical prices for identical goods across multiple platforms.

Amazon’s policy may sound as if it is picking winners and losers on its platform, a system that might appear ripe for corruption and unjustified self-preferencing. But there are several reasons to believe this is not the case. Amazon has built a reputation of low prices, quick delivery, and a high level of customer service. This reputation provides the company an incentive to ensure a consistently high level of quality over time. As Amazon increases the number of products and services offered on its platform, it also needs to devise ways to ensure that its promise of low prices and outstanding service is maintained.

This is where the Buy Box comes into play. All sellers on the platform can sell without utilizing the Buy Box. These transactions occur either on the seller’s own storefront or through the “other sellers” portion of the purchase page for a given good. Amazon’s PPC does not affect the way these sales occur. Additionally, the seller is free in this type of transaction to sell at whatever price it desires, including severely under- or overpricing the competition, as well as breaking price parity. Amazon’s policies do not directly determine prices.

The benefit of the Buy Box—and the reason that an IPPC can be so effective for buyers, sellers, and the platform—is that it both increases competition and decreases search costs. For sellers, there is a strong incentive to compete vigorously on price, since that should give them the best opportunity to sell through the Buy Box. Because the Buy Box is algorithmically driven—factoring in price parity, as well as a few other quality-centered metrics (reviews, shipping cost and speed, etc.)—the featured Buy Box seller can change multiple times per day.
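
To illustrate how such an algorithm might work, consider the following stylized sketch. The eligibility rule, data fields, weights, and scoring formula are all assumptions for exposition; Amazon’s actual Buy Box algorithm is not public:

```python
# Stylized sketch of algorithmic Buy Box selection; the weights and
# fields below are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    seller: str
    price: float                   # listed price on this platform
    lowest_price_elsewhere: float  # seller's lowest advertised price off-platform
    rating: float                  # seller rating, 0 to 5
    ship_days: int                 # promised delivery time

def buy_box_winner(offers: list[Offer]) -> Optional[Offer]:
    # Only parity-observing offers are eligible for the featured slot.
    eligible = [o for o in offers if o.price <= o.lowest_price_elsewhere]
    if not eligible:
        return None  # no Buy Box shown; buyers must pick a seller manually
    # Lower price, higher rating, and faster shipping all improve the score.
    def score(o: Offer) -> float:
        return -o.price + 0.5 * o.rating - 0.2 * o.ship_days
    return max(eligible, key=score)
```

Note that the sketch’s empty-eligibility branch mirrors the behavior described next: when no offer observes parity, no Buy Box is shown at all.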

Relative prices between sellers are not the only important factor in winning the Buy Box; absolute prices also play a role. For some products—where there are a limited number of sellers and none is observing parity, or all are pricing far above sellers on other platforms—the Buy Box is not displayed at all. This forces consumers to make a deliberate choice to buy from a specific seller, rather than from a preselected one. In effect, the Buy Box’s omission removes Amazon’s endorsement of the seller’s practices, while still allowing the seller to offer goods on the platform.

For consumers, this vigorous price competition leads to significantly lower prices with a high level of service. When a consumer uses the Buy Box (as opposed to buying directly from a given seller), Amazon is offering an assurance that the price, shipping cost, shipping speed, and service associated with that seller and that good are the best of all available options. Amazon is so confident in its algorithm that the assurance is backed up with a price guarantee: Amazon will match the price of relevant competitors and, until 2021, would foot the bill for any price drops that happened within seven days of purchase.

For Amazon, this commitment to low prices, high volume, and quality service leads to a sustained strong reputation. Since Amazon has an incentive to attract as many buyers and sellers as possible, to maximize its revenue through commissions on sales and advertising, the platform needs to carefully curate an environment that is conducive to repeated interactions. Buyers and sellers come together on the platform knowing that they are going to face the lowest prices, highest revenues, and highest level of service, because Amazon’s implicit price-parity clause (among other policies) aligns incentives in just the right way to optimize competition.

Conclusion

In some ways, an implicit price-parity clause is the Goldilocks of vertical price restraints.

Without a price-parity clause, there is little incentive to invest in the platform. Yes, there are low prices, but a race to the bottom may tend to lead to a single monopolistic platform. Additionally, consumer welfare is not maximized, since there are no services provided at an efficient level to bring additional value to buyers and sellers, leading to higher quality-adjusted prices. 

Under a strict price-parity clause, there is a strong incentive to invest in the platform, but the nature of removing selling rights due to a violation can lead to reduced price competition. While the quality of service under this system may be higher, the quality-adjusted price may remain high, since there are lower levels of competition putting downward pressure on prices.

An implicit price-parity clause takes the best aspects of both no-PPC and strict-PPC policies while discarding the worst. Sellers are free to set prices as they wish but have an incentive to comply with the policy, given the additional benefits they may receive from the Buy Box. The platform has sufficient protection from free riding due to the revocation of certain services, leading to high levels of investment in efficient services that increase quality and decrease quality-adjusted prices. Finally, consumers benefit from the vigorous price competition for the Buy Box, enjoying both lower prices and lower quality-adjusted prices once the efficient shipping and fulfilment undertaken by the platform are accounted for.

Current attempts to find an antitrust violation associated with PPCs—both implicit and otherwise—are likely misplaced. Any evidence gathered on the market will probably show an increase in consumer welfare. The reduced search costs on the platforms alone could outweigh any alleged increase in price, not to mention the time costs associated with rapid processing and shipping.

Further, while there are many claims that PPC policies—and high commissions on sales—harm sellers, the alternative is even worse. The only credible counterfactual, given the widespread permeation of PPC policies, is a world in which every Internet seller sells only through its own website. Not only would this increase costs for small businesses by a significant margin, but it would also likely drive many out of business. For sellers, the benefit of a platform is access to a multitude (in some cases, hundreds of millions) of potential consumers. To reach that number of consumers on its own, every independent seller would have to employ a marketing team that rivals a Fortune 500 company’s. The value proposition simply is not there and, until it is, platforms are the only viable option.

Before labeling a specific contractual obligation as harmful and anticompetitive, we need to understand how it works in the real world. To this point, there has been insufficient discussion about the intra-platform competition that occurs because of price-parity clauses, and the potential consumer-welfare benefits associated with implicit price-parity clauses. Ideally, courts, regulators, and policymakers will take the time going forward to think deeply about the costs and benefits associated with the clauses and choose the least harmful approach to enforcement.

Ultimately, consumers are the ones who stand to lose the most as a result of overenforcement. As always, enforcers should keep in mind that it is the welfare of consumers, not competitors or platforms, that is the overarching concern of antitrust.

The business press generally describes the gig economy that has sprung up around digital platforms like Uber and TaskRabbit as a beneficial phenomenon, “a glass that is almost full.” The gig economy “is an economy that operates flexibly, involving the exchange of labor and resources through digital platforms that actively facilitate buyer and seller matching.”

From the perspective of businesses, major positive attributes of the gig economy include cost-effectiveness (minimizing costs and expenses); labor-force efficiencies (“directly matching the company to the freelancer”); and flexible output production (individualized work schedules and enhanced employee motivation). Workers also benefit through greater independence, enhanced work flexibility (including hours worked), and the ability to earn extra income.

While there are some disadvantages as well (worker-commitment questions, business-ethics issues, lack of worker benefits, limited coverage of personal expenses, and worker isolation), there is no question that the gig economy has contributed substantially to the growth and flexibility of the American economy—a major social good. Indeed, “[i]t is undeniable that the gig economy has become an integral part of the American workforce, a trend that has only been accelerated during the” COVID-19 pandemic.

In marked contrast, however, the Federal Trade Commission’s (FTC) Sept. 15 Policy Statement on Enforcement Related to Gig Work (“gig statement” or “statement”) is the story of a glass that is almost empty. The accompanying press release declaring “FTC to Crack Down on Companies Taking Advantage of Gig Workers” (since when is “taking advantage of workers” an antitrust or consumer-protection offense?) puts an entirely negative spin on the gig economy. And while the gig statement begins by describing the nature and large size of the gig economy, it does so in a dispassionate and bland tone. No mention is made of the substantial benefits for consumers, workers, and the overall economy stemming from gig work. Rather, the gig statement quickly adopts a critical perspective in describing the market for gig workers and then addressing gig-related FTC-enforcement priorities. What’s more, the statement deals in very broad generalities and eschews specifics, rendering it of no real use to gig businesses seeking practical guidance.

Most significantly, the gig statement suggests that the FTC should play a significant enforcement role in gig-industry labor questions that fall outside its statutory authority. As such, the statement is fatally flawed as a policy document. It provides no true guidance and should be substantially rewritten or withdrawn.

Gig Statement Analysis

The gig statement’s substantive analysis begins with a negative assessment of gig-firm conduct. It expresses concern that gig workers are being misclassified as independent contractors and are thus deprived “of critical rights [right to organize, overtime pay, health and safety protections] to which they are entitled under law.” Relatedly, gig workers are said to be “saddled with inordinate risks.” Gig firms also “may use nontransparent algorithms to capture more revenue from customer payments for workers’ services than customers or workers understand.”

Heaven forfend!

The solution offered by the gig statement is “scrutiny of promises gig platforms make, or information they fail to disclose, about the financial proposition of gig work.” No mention is made of how these promises supposedly made to workers about the financial ramifications of gig employment are related to the FTC’s statutory mission (which centers on unfair or deceptive acts or practices affecting consumers or unfair methods of competition).

The gig statement next complains that a “power imbalance” between gig companies and gig workers “may leave gig workers exposed to harms from unfair, deceptive, and anticompetitive practices and is likely to amplify such harms when they occur.” “Power imbalance” along a vertical chain has not been a source of serious antitrust concern for decades (and even in the case of the Robinson-Patman Act, the U.S. Supreme Court most recently stressed, in 2005’s Volvo v. Reeder, that harm to interbrand competition is the key concern). “Power imbalances” between workers and employers bear no necessary relation to the promotion of consumer welfare, which the Supreme Court teaches is the raison d’être of antitrust. Moreover, the FTC does not explain why unfair or deceptive conduct likely follows from the mere existence of substantial bargaining power. Such an unsupported assertion is not worthy of being included in a serious agency-policy document.

The gig statement then engages in more idle speculation about a supposed relationship between market concentration and the proliferation of unfair and deceptive practices across the gig economy. The statement claims, without any substantiation, that gig companies in concentrated platform markets will be incentivized to exert anticompetitive market power over gig workers, and thereby “suppress wages below competitive rates, reduce job quality, or impose onerous terms on gig workers.” Relatedly, “unfair and deceptive practices by one platform can proliferate across the labor market, creating a race to the bottom that participants in the gig economy, and especially gig workers, have little ability to avoid.” No empirical or theoretical support is advanced for any of these bald assertions, which give the strong impression that the commission plans to target gig-economy companies for enforcement actions without regard to the actual facts on the ground. (By contrast, the commission has in the past developed detailed factual records of competitive and/or consumer-protection problems in health care and other important industry sectors as a prelude to possible future investigations.)

The statement then launches into a description of the FTC’s gig-economy policy priorities. It notes first that “workers may be deprived of the protections of an employment relationship” when gig firms classify them as independent contractors, leading to firms’ “disclosing [of] pay and costs in an unfair and deceptive manner.” What’s more, the FTC “also recognizes that misleading claims [made to workers] about the costs and benefits of gig work can impair fair competition among companies in the gig economy and elsewhere.”

These extraordinary statements seem to be saying that the FTC plans to closely scrutinize gig-economy-labor contract negotiations, based on its distaste for independent contracting (which it believes should be supplanted by employer-employee relationships, a question of labor law, not FTC law). Nowhere is it explained where such a novel FTC exercise of authority comes from, nor how such FTC actions have any bearing on harms to consumer welfare. The FTC’s apparent desire to force employment relationships upon gig firms is far removed from harm to competition or unfair or deceptive practices directed at consumers. Without more of an explanation, one is left to conclude that the FTC is proposing to take actions that are far beyond its statutory remit.

The gig statement next tries to tie the FTC’s new gig program to violations of the FTC Act (“unsubstantiated claims”); the FTC’s Franchise Rule; and the FTC’s Business Opportunity Rule, violations of which “can trigger civil penalties.” The statement, however, lacks any sort of logical, coherent explanation of how the new enforcement program necessarily follows from these other sources of authority. While a few examples of rules-based enforcement actions that have some connection to certain terms of employment may be pointed to, such special cases are a far cry from any sort of general justification for turning the FTC into a labor-contracts regulator.

The statement then moves on to the alleged misuse of algorithmic tools dealing with gig-worker contracts and supervision that may lead to unlawful gig-worker oversight and termination. Once again, the connection of any of this to consumer-welfare harm (from a competition or consumer-protection perspective) is not made.

The statement further asserts that FTC Act consumer-protection violations may arise from “nonnegotiable” and other unfair contracts. In support of such a novel exercise of authority, however, the FTC cites supposedly analogous “unfair” clauses found in consumer contracts with individuals or small-business consumers. It is highly doubtful that these precedents support any FTC enforcement actions involving labor contracts.

Noncompete clauses with individuals are next on the gig statement’s agenda. It is claimed that “[n]on-compete provisions may undermine free and fair labor markets by restricting workers’ ability to obtain competitive offers for their services from existing companies, resulting in lower wages and degraded working conditions. These provisions may also raise barriers to entry for new companies.” The assertion, however, that such clauses may violate Section 1 of the Sherman Act or Section 5 of the FTC Act’s bar on unfair methods of competition, seems dubious, to say the least. Unless there is coordination among companies, these are essentially unilateral contracting practices that may have robust efficiency explanations. Making out these practices to be federal antitrust violations is bad law and bad policy; they are, in any event, subject to a wide variety of state laws.

Even more problematic is the FTC’s claim that a variety of standard (typically efficiency-seeking) contract limitations, such as nondisclosure agreements and liquidated damages clauses, “may be excessive or overbroad” and subject to FTC scrutiny. This preposterous assertion would make the FTC into a second-guesser of common labor contracts (a federal labor-contract regulator, if you will), a role for which it lacks authority and is entirely unsuited. Turning the FTC into a federal labor-contract regulator would impose unjustifiable uncertainty costs on business and chill a host of efficient arrangements. It is hard to take such a claim of power seriously, given its lack of any credible statutory basis.

The final section of the gig statement dealing with FTC enforcement (“Policing Unfair Methods of Competition That Harm Gig Workers”) is unobjectionable, but not particularly informative. It essentially states that the FTC’s black letter legal authority over anticompetitive conduct also extends to gig companies: the FTC has the authority to investigate and prosecute anticompetitive mergers; agreements among competitors to fix terms of employment; no-poach agreements; and acts of monopolization and attempted monopolization. (Tell us something we did not know!)

The fact that gig-company workers may be harmed by such arrangements is noted. The mere page and a half devoted to this legal summary, however, provides little practical guidance for gig companies as to how to avoid running afoul of the law. Antitrust policy statements may be excused for providing less detailed guidance than antitrust guidelines, but it would be helpful if they did something more than provide a capsule summary of general American antitrust principles. The gig statement does not pass this simple test.

The gig statement closes with a few glittering generalities. Cooperation with other agencies is highlighted (for example, an information-sharing agreement with the National Labor Relations Board is described). The FTC describes an “Equity Action Plan” calling for a focus on how gig-economy antitrust and consumer-protection abuses harm underserved communities and low-wage workers.

The FTC finishes with a request for input from the public and from gig workers about abusive and potentially illegal gig-sector conduct. No mention is made of the fact that the FTC must, of course, conform itself to the statutory limitations on its jurisdiction in the gig sector, as in all other areas of the economy.

Summing Up the Gig Statement

In sum, the critical flaw of the FTC’s gig statement is its focus on questions of labor law and policy (including the question of independent contractor as opposed to employee status) that are the proper purview of federal and state statutory schemes not administered by the Federal Trade Commission. (A secondary flaw is the statement’s unbalanced portrayal of the gig sector, which ignores its beneficial aspects.) If the FTC decides that gig-economy issues deserve particular enforcement emphasis, it should (and, indeed, must) direct its attention to anticompetitive actions and unfair or deceptive acts or practices that harm consumers.

On the antitrust side, that might include collusion among gig companies on the terms offered to workers or perhaps “mergers to monopoly” between gig companies offering a particular service. On the consumer-protection side, that might include making false or materially misleading statements to consumers about the terms under which they purchase gig-provided services. (It would be conceivable, of course, that some of those statements might be made, unwittingly or not, by gig independent contractors, at the behest of the gig companies.)

The FTC also might carry out gig-industry studies to identify particular prevalent competitive or consumer-protection harms. The FTC should not, however, seek to transform itself into a gig-labor-market enforcer and regulator, in defiance of its lack of statutory authority to play this role.

Conclusion

The FTC does, of course, have a legitimate role to play in challenging unfair methods of competition and unfair acts or practices that undermine consumer welfare wherever they arise, including in the gig economy. But it does a disservice by focusing merely on supposed negative aspects of the gig economy and conjuring up a gig-specific “parade of horribles” worthy of close commission scrutiny and enforcement action.

Many of the “horribles” cited may not even be “bads,” and many of them are, in any event, beyond the proper legal scope of FTC inquiry. There are other federal agencies (for example, the National Labor Relations Board) whose statutes may prove applicable to certain problems noted in the gig statement. In other cases, statutory changes may be required to address certain problems noted in the statement (assuming they actually are problems). The FTC, and its fellow enforcement agencies, should keep in mind, of course, that they are not Congress, and wishing for legal authority to deal with problems does not create it (something the federal judiciary fully understands).  

In short, the negative atmospherics that permeate the gig statement are unnecessary and counterproductive; if anything, they are likely to convince at least some judges that the FTC is not the dispassionate finder of fact and enforcer of law that it claims to be. In particular, the judiciary is unlikely to be impressed by the FTC’s apparent effort to insert itself into questions that lie far beyond its statutory mandate.

The FTC should withdraw the gig statement. If, however, it does not, it should revise the statement in a manner that is respectful of the limits on the commission’s legal authority, and that presents a more dispassionate analysis of gig-economy business conduct.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Things are heating up in the antitrust world. There is considerable pressure to pass the American Innovation and Choice Online Act (AICOA) before the congressional recess in August—a short legislative window before members of Congress shift their focus almost entirely to campaigning for the mid-term elections. While it would not be impossible to advance the bill after the August recess, it would be a steep uphill climb.

But whether it passes or not, some of the damage from AICOA may already be done. The bill has moved the antitrust dialogue in a direction that will harm innovation and consumers. In this post, I will first explain AICOA’s fundamental flaws. Next, I discuss the negative impact that the legislation is likely to have if passed, even if courts and agencies do not aggressively enforce its provisions. Finally, I show how AICOA has already provided an intellectual victory for the approach articulated in the European Union’s (EU) Digital Markets Act (DMA): it has built momentum for a dystopian regulatory framework to break up and break into U.S. superstar firms designated as “gatekeepers,” at the expense of innovation and consumers.

The Unseen of AICOA

AICOA’s drafters argue that, once passed, it will deliver numerous economic benefits. Sen. Amy Klobuchar (D-Minn.)—the bill’s main sponsor—has stated that it will “ensure small businesses and entrepreneurs still have the opportunity to succeed in the digital marketplace. This bill will do just that while also providing consumers with the benefit of greater choice online.”

Section 3 of the bill would provide “business users” of the designated “covered platforms” with a wide range of entitlements. This includes preventing the covered platform from offering any services or products that a business user could provide (the so-called “self-preferencing” prohibition); allowing a business user access to the covered platform’s proprietary data; and an entitlement for business users to have “preferred placement” on a covered platform without having to use any of that platform’s services.

These entitlements would provide non-platform businesses what are effectively claims on the platform’s proprietary assets, notwithstanding the covered platform’s own investments to collect data, create services, and invent products—in short, the platform’s innovative efforts. As such, AICOA is redistributive legislation that creates the conditions for unfair competition in the name of “fair” and “open” competition. It treats the behavior of “covered platforms” differently than identical behavior by their competitors, without considering the deterrent effect such a framework will have on consumers and innovation. Thus, AICOA offers rent-seeking rivals a formidable avenue to reap considerable benefits at the expense of the innovators thanks to the weaponization of antitrust to subvert, not improve, competition.

In mandating that covered platforms make their data and proprietary assets freely available to “business users” and rivals, AICOA undermines the underpinning of free markets to pursue the misguided goal of “open markets.” The inevitable result will be the tragedy of the commons. Absent the covered platforms having the ability to benefit from their entrepreneurial endeavors, the law no longer encourages innovation. As Joseph Schumpeter seminally predicted: “perfect competition implies free entry into every industry … But perfectly free entry into a new field may make it impossible to enter it at all.”

To illustrate, if business users can freely access, say, a special status on the covered platforms’ ancillary services without having to use any of the covered platform’s services (as required under Section 3(a)(5)), then platforms are disincentivized from inventing zero-priced services, since they cannot cross-monetize these services with existing services. Similarly, if, under Section 3(a)(1) of the bill, business users can stop covered platforms from pre-installing or preferencing an app whenever they happen to offer a similar app, then covered platforms will be discouraged from investing in or creating new apps. Thus, the bill would generate a considerable deterrent effect for covered platforms to invest, invent, and innovate.

AICOA’s most detrimental consequences may not be immediately apparent; they could instead manifest in larger and broader downstream impacts that will be difficult to undo. As the 19th-century French economist Frederic Bastiat wrote: “a law gives birth not only to an effect but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause—it is seen. The others unfold in succession—they are not seen; it is well for us if they are foreseen … it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come,—at the risk of a small present evil.”

To paraphrase Bastiat, AICOA offers ill-intentioned rivals a “small present good”–i.e., unconditional access to the platforms’ proprietary assets–while society suffers the loss of a greater good–i.e., incentives to innovate and welfare gains to consumers. The logic is akin to that of those who advocate abolishing intellectual-property rights: the immediate (and seen) gain is obvious, namely wider dissemination of innovation and lower prices for it, while the subsequent (and unseen) evil remains opaque, as the destruction of the institutional premises for innovation will generate considerable long-term costs.

Fundamentally, AICOA weakens the benefits of scale by pursuing vertical disintegration of the covered platforms to the benefit of short-term static competition. In the long term, however, the bill would dampen dynamic competition, ultimately harming consumer welfare and the capacity for innovation. The measure’s opportunity costs will prevent covered platforms’ innovations from benefiting other business users or consumers. They personify the “unseen,” as Bastiat put it: “[they are] always in the shadow, and who, personifying what is not seen, [are] an essential element of the problem. [They make] us understand how absurd it is to see a profit in destruction.”

The costs could well amount to hundreds of billions of dollars for the U.S. economy, even before accounting for the costs of deterred innovation. The unseen is costly, the seen is cheap.

A New Robinson-Patman Act?

Most antitrust laws are terse, vague, and old: The Sherman Act of 1890, the Federal Trade Commission Act, and the Clayton Act of 1914 deal largely in generalities, with considerable deference for courts to elaborate in a common-law tradition on the specificities of what “restraints of trade,” “monopolization,” or “unfair methods of competition” mean.

In 1936, Congress passed the Robinson-Patman Act, designed to protect competitors from the then-disruptive competition of large firms who—thanks to scale and practices such as price differentiation—upended traditional incumbents to the benefit of consumers. Passed after “Congress made no factual investigation of its own, and ignored evidence that conflicted with accepted rhetoric,” the law prohibits price differentials that would benefit buyers, and ultimately consumers, in the name of less vigorous competition from more efficient, more productive firms. Indeed, under the Robinson-Patman Act, manufacturers cannot give a bigger discount to a distributor who would pass these savings onto consumers, even if the distributor performs extra services relative to others.

Former President Gerald Ford declared in 1975 that the Robinson-Patman Act “is a leading example of [a law] which restrain[s] competition and den[ies] buyers substantial savings…It discourages both large and small firms from cutting prices, making it harder for them to expand into new markets and pass on to customers the cost-savings on large orders.” Despite this, calls to amend or repeal the Robinson-Patman Act—supported by, among others, competition scholars like Herbert Hovenkamp and Robert Bork—have failed.

In the 1983 Abbott decision, Justice Lewis Powell wrote: “The Robinson-Patman Act has been widely criticized, both for its effects and for the policies that it seeks to promote. Although Congress is aware of these criticisms, the Act has remained in effect for almost half a century.”

Nonetheless, the act’s enforcement dwindled, thanks to wise restraint from antitrust agencies and the courts. While it is seldom enforced today, the act continues to create considerable legal uncertainty, as it raises regulatory risks for companies whose conduct may conflict with its provisions. Indeed, many of the same so-called “neo-Brandeisians” who support passage of AICOA also advocate reinvigorating Robinson-Patman. More specifically, the new FTC majority has expressed that it is eager to revitalize Robinson-Patman, even though the law protects less-efficient competitors. In other words, the Robinson-Patman Act is a zombie law: dead, but still moving.

Even if antitrust agencies and courts ultimately follow the same path of regulatory and judicial restraint on AICOA that they have taken on Robinson-Patman, the legal uncertainty created by the law’s mere existence will act as a powerful deterrent to disruptive competition that dynamically benefits consumers and innovation. In short, as with the Robinson-Patman Act, agencies and courts will either enforce AICOA–thus generating the law’s adverse effects on consumers and innovation–or refrain from enforcing it, in which case the legal uncertainty will lead to unseen, harmful effects on innovation and consumers.

For instance, the bill’s prohibition on “self-preferencing” in Section 3(a)(1) will prevent covered platforms from offering consumers new products and services that happen to compete with incumbents’ products and services. Self-preferencing often is a pro-competitive, pro-efficiency practice that companies widely adopt—a reality that AICOA seems to ignore.

Would AICOA prevent, e.g., Apple from offering a bundled subscription to Apple One, which includes Apple Music, so that the company can effectively compete with incumbents like Spotify? As with Robinson-Patman, antitrust agencies and courts will have to choose whether to enforce a productivity-decreasing law, or to ignore congressional intent but, in the process, generate significant legal uncertainties.

Judge Bork once wrote that Robinson-Patman was “antitrust’s least glorious hour” because, rather than improving competition and innovation, it reduced competition from firms that happened to be more productive, innovative, and efficient than their rivals. The law infamously protected inefficient competitors rather than competition. From a legislative-history perspective, however, AICOA may be antitrust’s new “least glorious hour.” If adopted, it will adversely affect innovation and consumers, as opportunistic rivals will be able to prevent cost-saving practices by the covered platforms.

As with Robinson-Patman, calls to amend or repeal AICOA may follow its passage. But the Robinson-Patman Act illustrates the path dependency of bad antitrust laws. However costly and damaging, AICOA would likely stay in place, with regular calls for either stronger or weaker enforcement, depending on whether momentum shifts toward populist antitrust or toward antitrust more consistent with dynamic competition.

Victory of the Brussels Effect

The future of AICOA does not bode well for markets, either from a historical perspective or from a comparative-law perspective. The EU’s DMA similarly targets a few large tech platforms, but it is broader, harsher, and swifter. In the competition between these two examples of self-inflicted techlash, AICOA will pale in comparison with the DMA, and covered platforms will be forced to align with the DMA’s obligations and prohibitions.

Consequently, AICOA is a victory of the DMA and of the Brussels effect in general. AICOA effectively crowns the DMA as the all-encompassing regulatory assault on digital gatekeepers. While members of Congress have introduced numerous antitrust bills aimed at targeting gatekeepers, the DMA is the one-stop-shop regulation that encompasses multiple antitrust bills and imposes broader prohibitions and stronger obligations on gatekeepers. In other words, the DMA outcompetes AICOA.

Commentators seldom lament the extraterritorial impact of European regulations. With regard to regulating digital gatekeepers, U.S. officials should have pushed back against the innovation-stifling, welfare-decreasing effects of the DMA on U.S. tech companies, in particular, and on U.S. technological innovation, in general. To be fair, a few U.S. officials, such as Commerce Secretary Gina Raimondo, did voice opposition to the DMA. Indeed, well aware of the DMA’s protectionist intent and its potential to break up and break into tech platforms, Raimondo expressed concerns that antitrust should not be about protecting competitors and deterring innovation, but rather about protecting the process of competition, however disruptive it may be.

The influential neo-Brandeisians and radical antitrust reformers, however, lashed out at Raimondo and effectively shamed the Biden administration into embracing the DMA (and its sister regulation, AICOA). Brussels did not have to exert its regulatory overreach; the U.S. administration happily imports and emulates European overregulation. There is no better way for European officials to see their dreams come true: a techlash against U.S. digital platforms that enjoys the support of local officials.

In that regard, AICOA has already played a significant role in shaping the intellectual mood in Washington and in altering the course of U.S. antitrust. Members of Congress designed AICOA along the lines pioneered by the DMA. Sen. Klobuchar has argued that America should emulate European competition policy regarding tech platforms. Lina Khan, now chair of the FTC, co-authored the U.S. House Antitrust Subcommittee report, which recommended adopting the European concept of “abuse of dominant position” in U.S. antitrust; in her current position, Khan now praises the DMA. Tim Wu, competition counsel for the White House, has praised European competition policy and officials. Indeed, the neo-Brandeisians have not only praised the European Commission’s fines against U.S. tech platforms (despite early criticisms from former President Barack Obama) but have, more dramatically, called for the United States to imitate the European regulatory framework.

In this regulatory race to inefficiency, the standard is set in Brussels with the blessing of U.S. officials. Not even the precedent set by the EU’s General Data Protection Regulation (GDPR) fully captures the effects the DMA will have. Privacy laws passed by U.S. states have mostly reacted to the reality of the GDPR. With AICOA, Congress is proactively anticipating, emulating, and welcoming the DMA before it has even been adopted. The intellectual and policy shift is historic, and so is the policy error.

AICOA and the Boulevard of Broken Dreams

AICOA is a failure similar to the Robinson-Patman Act and a victory for the Brussels effect and the DMA. Consumers will be the collateral damage, and the unseen effects on innovation will take years to materialize. Calls to amend or repeal AICOA would likely fail, and its inevitable costs would forever weigh upon consumers and innovation dynamics.

AICOA illustrates the neo-Brandeisian hostility toward large innovative companies. Joseph Schumpeter warned against such hostility, and its power to discourage entrepreneurs from innovating, when he wrote:

Faced by the increasing hostility of the environment and by the legislative, administrative, and judicial practice born of that hostility, entrepreneurs and capitalists—in fact the whole stratum that accepts the bourgeois scheme of life—will eventually cease to function. Their standard aims are rapidly becoming unattainable, their efforts futile.

President William Howard Taft once said, “the world is not going to be saved by legislation.” AICOA will not save antitrust, nor will it save consumers. To paraphrase Schumpeter, the bill’s drafters “walked into our future as we walked into the war, blindfolded.” AICOA’s promises of greater competition, a fairer marketplace, greater consumer choice, and more consumer benefits will ultimately scatter across the boulevard of broken dreams.

The Baron de Montesquieu once wrote that legislators should only change laws with a “trembling hand”:

It is sometimes necessary to change certain laws. But the case is rare, and when it happens, they should be touched only with a trembling hand: such solemnities should be observed, and such precautions are taken that the people will naturally conclude that the laws are indeed sacred since it takes so many formalities to abrogate them.

AICOA’s drafters had a clumsy hand, coupled with what Friedrich Hayek would call “a pretense of knowledge.” They were certain they were doing social good and incapable of imagining they might be doing social harm. The future will remember AICOA as antitrust’s new least glorious hour, in which consumers and innovation were sacrificed on the altar of a revitalized populist view of antitrust.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Early Morning

I wake up grudgingly to the loud ring of my phone’s preset alarm sound (I swear I gave third-party alarms a fair shot). I slide my feet into the bedroom slippers and mechanically chaperone my body to the coffee machine in the living room.

“Great,” I think to myself, “Out of capsules, again.” Still in my bathrobe, I make a grumpy face and post an interoperable story on social media. “Don’t even talk to me before I’ve had my morning coffee! #HateMondays.”

I flick my thumb and get a warm, fuzzy feeling of satisfaction as I consent to a series of privacy-related pop-ups on the online marketplace incumbent’s official website (I place immense importance on my privacy) before getting ready to sit through the usual fairness presentations.

I reach for a chair, grab a notepad and crack my neck sideways as I try to focus my (still) groggy brain on the kaleidoscope of thumbnails before me. “Time to do my part,” I sigh. My eyes—trained by years of practice—dart from left to right and from right to left, carefully scrutinizing each coffee capsule on offer for an equal number of seconds (ever since the self-preferencing ban, all available products within a search category are displayed simultaneously on the screen to avoid any explicit or tacit bias that could be interpreted as giving the online marketplace incumbent’s own products an unfair advantage over competitors).

After 13 brands and at least as many flavors, I select the platform’s own brand, “Basic” (it matches my coffee machine, and I’ve found through trial and error that its capsules are the least prone to malfunctioning), and then answer a series of questions to make sure I have actually given competitors’ products fair consideration. Platforms—including the online marketplace incumbent—use sneaky and illegal ways to leverage the attention market and give a leg up to their own products, such as offering lower prices or better delivery conditions. But with enough practice you learn to see through it. Not on my watch!

Exhausted but pleased with myself, I put the notepad down and my feet up on the coffee table. Victory.

Noon

I curse as I stub my toe on the office chair. Still with a pen in my right hand, ink dripping, I whip out my phone and pick WhatsApp to answer (I’ve never felt the need to use any of the other, newer apps—since everything is interoperable now). “No, of course I didn’t forget to do the groceries,” I tell my girlfriend with a tinge of deliberate frustration. But, of course, she knows that I know that she knows that I did.

I grab my notepad and almost fall over as I try to slide into my jeans and produce a grocery itinerary (like a grocery list, but longer) at the same time. “Trader Pete’s for fruits and vegetables, Gracey’s for canned goods, HTS for HTS frozen pizza,” I scribble, nerves tense.

(Not every company has gone the way of the online marketplace incumbent and some have decided they would be better off if they just sold their own products. After all, you can’t be fined for self-preferencing if you’re only selling your own stuff. Of course, the strategy is only viable in those industries in which vertical integration hasn’t been banned).

I finish getting dressed and dash down the stairs. I instinctively glance at my phone before getting in the car and immediately regret it, as I dismiss a bunch of notifications about malware infections. “Another app store that I’m striking from the list,” I think to myself as I turn on the ignition.

Late Afternoon

My girlfriend has already ordered a soda as I sit down at the table. “Sorry I’m late,” I mumble. We talk about her day and I tell her about the capsules I ordered (she nods approvingly) before we finally decide to order. I wave to the waiter and ask about the specials. A lanky young man no older than 19 fumbles through his (empty) pad and lists a couple of dishes.

He blurts out “homemade” and immediately turns pale. I look at my girlfriend nervously, and she stares back blankly—dazed. “Do you mean to say that it was made here, in this restaurant?” I ask in disbelief, dizzy. He comes up with some sorry excuse, but I’m having none of it. I make my way to the toilet—sickened—and pull out my phone with a shaky hand. I have the Federal Trade Commission on speed-dial. I call and select number one: self-preferencing. They immediately put me through to someone. Sweating, I explain that the Italian restaurant on the corner of 5th and Madison avenues just recommended me a special dish made by the restaurant itself—and barely even mentioned any of the specialties offered by the kebab joint next door. I assure the voice at the other end of the line that I had nothing to do with it, and that I have not ordered—let alone tasted—the dish.

I rush out of the bathroom with blinders on and pull my girlfriend by the elbow. Her coat is on and she’s clearly impatient to get the hell out of there. As I reach for my jacket by the exit, an older man with a moustache approaches us with a bowed head and literally begs us to take a bottle of wine (no doubt a bribe for my silence). He assures us that the wine is not “della casa” (made by the restaurant), and that it’s, in fact, a French wine made by a competitor. I’m not having any of it: I bid him good day and slam the door behind us.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

May 2007, Palo Alto

The California sun shone warmly on Eric Schmidt’s face as he stepped out of his car and made his way to have dinner at Madera, a chic Palo Alto restaurant.

Dining out was a welcome distraction from the endless succession of strategy meetings with the nitpickers of the law department, which had been Schmidt’s bread and butter for the last few months. The lawyers seemed to take issue with any new project that Google’s engineers came up with. “How would rivals compete with our maps?”; “Our placement should be no less favorable than rivals’”; etc. The objections were endless.

This is not how things were supposed to be. When Schmidt became Google’s chief executive officer in 2001, his mission was to take the company public and grow the firm into markets other than search. But then something unexpected happened. After campaigning on an anti-monopoly platform, a freshman senator from Minnesota managed to get her anti-discrimination bill through Congress in just her first few months in office. All companies with a market cap of more than $150 billion were now prohibited from favoring their own products. Google had recently crossed that Rubicon, putting a stop to years of carefree expansion into new markets.

But today was different. The waiter led Schmidt to his table overlooking Silicon Valley. His acquaintance was already seated. 

With his tall and slender figure, Andy Rubin had garnered quite a reputation among Silicon Valley’s elite. After engineering stints at Apple and Motorola, developing various handheld devices, Rubin had set up his own shop. The idea was bold: develop the first open mobile platform—based on Linux, no less. Rubin had pitched the project to Google in 2005, but given the regulatory uncertainty over the future of antitrust—the same wave of populist sentiment that would carry Klobuchar to office one year later—Schmidt and his team had passed.

“There’s no money in open source,” the company’s CFO ruled. Schmidt had initially objected, but with more pressing matters to deal with, he ultimately followed his CFO’s advice.

Schmidt and Rubin were exchanging pleasantries about Microsoft and Java when the meals arrived—sublime Wagyu short ribs and charred spring onions paired with a 1986 Chateau Margaux.

Rubin finally cut to the chase. “Our mobile operating system will rely on state-of-the-art touchscreen technology. Just like the device being developed by Apple. Buying Android today might be your only way to avoid paying monopoly prices to access Apple’s mobile users tomorrow.”

Schmidt knew this all too well: The future was mobile, and few companies were taking Apple’s upcoming iPhone seriously enough. Even better, as a firm, Android was treading water. Like many other startups, it had excellent software but no business model. And with the Klobuchar bill putting the brakes on startup investment—monetizing an ecosystem had become a delicate legal proposition, deterring established firms from acquiring startups—Schmidt was in the middle of a buyer’s market. “Android could make us a force to be reckoned with,” Schmidt thought to himself.

But he quickly shook that thought, remembering the words of his CFO: “There is no money in open source.” In an ideal world, Google would have used Android to promote its search engine—placing a search bar on Android devices to draw users to its search engine—or maybe it could have tied a proprietary app store to the operating system, thus earning money from in-app purchases. But with the Klobuchar bill, these were no longer options. Not without endless haggling with Google’s planning committee of lawyers.

And they would have a point, of course. Google risked heavy fines and court-issued injunctions that would stop the project in its tracks. Such risks were not to be taken lightly. Schmidt needed a plan to make the Android platform profitable while accommodating Google’s rivals, but he had none.

The desserts were served, Schmidt steered the conversation to other topics, and the sun slowly set over Sand Hill Road.

Present Day, Cupertino

Apple continues to dominate the smartphone industry, with few signs of significant competition on the horizon. While there are continuing rumors that Google, Facebook, or even TikTok might enter the market, these have so far failed to materialize.

Google’s failed partnership with Samsung, back in 2012, still looms large over the industry. After lengthy talks, Google ultimately entered into an agreement with the longstanding mobile manufacturer to create an open mobile platform. Unfortunately, the deal was mired in antitrust issues and clashing visions—Samsung was believed to favor a closed ecosystem, rather than the open platform envisioned by Google.

The sense that Apple is running away with the market is only reinforced by recent developments. Last week, Tim Cook unveiled the company’s new iPhone 11—the first ever mobile device to come with three cameras. With an eye-watering price tag of $1,199 for the top-of-the-line Pro model, it certainly is not cheap. In his presentation, Cook assured consumers Apple had solved the security issues that have been an important bugbear for the iPhone and its ecosystem of competing app stores.

Analysts expect the new range of devices will help Apple cement the iPhone’s 50% market share. This is especially likely given the important challenges that Apple’s main rivals continue to face.

The Windows Phone’s reputation for buggy software continues to undermine its competitive position, despite its comparatively low price point. Andy Rubin, the head of the Windows Phone, was reassuring in a press interview, but there is little tangible evidence that he will manage to rescue the floundering ship. Meanwhile, Huawei has come under increased scrutiny for the threats it may pose to U.S. national security. The Chinese manufacturer may face a U.S. sales ban unless the company’s smartphone branch is sold to a U.S. buyer. Oracle is said to be a likely candidate.

The sorry state of mobile competition has become an increasingly prominent policy issue. President Klobuchar took to Twitter and called on mobile-device companies to refrain from acting as monopolists, intimating elsewhere that failure to do so might warrant tougher regulation than her anti-discrimination bill.

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws but, in some cases, actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” within the broader category of “sensitive covered data,” which could not be collected or processed without a consumer’s expression of affirmative consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of the ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change, and an “individual’s online activities” are once again deemed “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to contemplate the possibility of user consent at all; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is, we can expect a good deal of litigation on this point. That litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it effectively invites judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes a “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appears to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. There should therefore be no legal ambiguity about when collecting information on an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and revert to the version of the ADPPA that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original discussion draft in the version approved by the committee. As introduced, the bill included an exception in Section 101(b)(2) that could have partially addressed the concern (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
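
To make the definitional point concrete, here is a minimal sketch in Python of the converse structure as we read it; the predicate name and the weather-data example are our own hypothetical illustrations, not the statute’s text:

    # A minimal sketch of our reading of the marked-up definition.
    # The predicate below is a hypothetical stand-in for the statutory
    # "linked or reasonably linkable" test, used purely for illustration.

    def is_covered(info: dict) -> bool:
        # "Covered data" (simplified): identifies, or is linked or
        # reasonably linkable to, a distinct individual or device.
        return info.get("reasonably_linkable", False)

    def is_deidentified(info: dict) -> bool:
        # As we read the marked-up text, "de-identified data" is simply
        # the converse of "covered data" -- with no requirement that the
        # information was ever derived from identifiable data.
        return not is_covered(info)

    # Aggregate weather readings were never personal data, yet on this
    # reading they meet the definition of "de-identified data" and would
    # attract the duties discussed below.
    weather_readings = {"reasonably_linkable": False}
    assert is_deidentified(weather_readings)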

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. hold a possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) if they may at any point in the future need to link the data with individuals. It seems the drafters had some very specific (and likely rare) mischief in mind, but they ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve a third party’s ability to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically speaking, one would have expected the ADPPA to permit sharing data in a de-identified form while preserving the option of later re-identification; that would align with the principle of data minimization. What the ADPPA does instead is effectively to force parties who wish to preserve that option to share personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but larger firms are given no similar entitlement. Given such open-ended questions as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives are obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment that would have pre-empted the FCC from enforcing any provision of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes was offered, but withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carve-out for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt California’s own California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws could go beyond the ADPPA’s requirements failed in a 48-8 roll-call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language, should the CPPA seek to enforce provisions of the California Consumer Privacy Act (CCPA) that otherwise conflict with the ADPPA, is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version, which reverts the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also deeply puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. These examples strongly suggest that the ADPPA remains far from being a properly considered candidate for comprehensive federal privacy legislation.