In the world of video games, the process by which players train themselves or their characters in order to overcome a difficult “boss battle” is called “leveling up.” I find that the phrase also serves as a useful metaphor in the context of corporate mergers. Here, “leveling up” can be thought of as acquiring another firm in order to enter or reinforce one’s presence in an adjacent market where a larger and more successful incumbent is already active.

In video-game terminology, that incumbent would be the “boss.” Acquiring firms choose to level up when they recognize that building internal capacity to compete with the “boss” is too slow, too expensive, or is simply infeasible. An acquisition thus becomes the only way “to beat the boss” (or, at least, to maximize the odds of doing so).

Alas, this behavior is often mischaracterized as a “killer acquisition” or “reverse killer acquisition.” What separates leveling up from killer acquisitions is that the former serves to turn the merged entity into a more powerful competitor, while the latter attempts to weaken competition. In the case of “reverse killer acquisitions,” the assumption is that the acquiring firm would have entered the adjacent market on its own absent the merger, leaving even more firms competing in that market.

In other words, the distinction ultimately boils down to a simple (though hard to answer) question: could both the acquiring and target firms have effectively competed with the “boss” without a merger?

Because they are ubiquitous in the tech sector, these mergers—sometimes also referred to as acquisitions of nascent competitors—have drawn tremendous attention from antitrust authorities and policymakers. All too often, policymakers fail to adequately consider the realistic counterfactual to a merger and mistake leveling up for a killer acquisition. The most recent high-profile example is Meta’s acquisition of the virtual-reality fitness app Within. But in what may be a hopeful sign of a turning of the tide, a federal court appears set to clear that deal over objections from the Federal Trade Commission (FTC).

Some Recent ‘Boss Battles’

The canonical example of leveling up in tech markets is likely Google’s acquisition of Android back in 2005. While Apple had not yet launched the iPhone, it was already clear by 2005 that mobile would become an important way to access the internet (including Google’s search services). Rumors were swirling that Apple, following its tremendously successful iPod, had started developing a phone, and Microsoft had been working on Windows Mobile for a long time.

In short, there was a serious risk that Google would be reliant on a single mobile gatekeeper (i.e., Apple) if it did not move quickly into mobile. Purchasing Android was seen as the best way to do so. (Indeed, averting an analogous sort of threat appears to be driving Meta’s move into virtual reality today.)

The natural next question is whether Google or Android could have succeeded in the mobile market absent the merger. My guess is that the answer is no. In 2005, Google did not produce any consumer hardware. Quickly and successfully making the leap would have been daunting. As for Android:

Google had significant advantages that helped it to make demands from carriers and OEMs that Android would not have been able to make. In other words, Google was uniquely situated to solve the collective action problem stemming from OEMs’ desire to modify Android according to their own idiosyncratic preferences. It used the appeal of its app bundle as leverage to get OEMs and carriers to commit to support Android devices for longer with OS updates. The popularity of its apps meant that OEMs and carriers would have great difficulty in going it alone without them, and so had to engage in some contractual arrangements with Google to sell Android phones that customers wanted. Google was better resourced than Android likely would have been and may have been able to hold out for better terms with a more recognizable and desirable brand name than a hypothetical Google-less Android. In short, though it is of course possible that Android could have succeeded despite the deal having been blocked, it is also plausible that Android became so successful only because of its combination with Google. (citations omitted)

In short, everything suggests that Google’s purchase of Android was a good example of leveling up. Note that much the same could be said about the company’s decision to purchase Fitbit in order to compete against Apple and its Apple Watch (which quickly dominated the market after its launch in 2015).

A more recent example of leveling up is Microsoft’s planned acquisition of Activision Blizzard. In this case, the merger appears to be about improving Microsoft’s competitive position in the platform market for game consoles, rather than in the adjacent market for games.

At the time of writing, Microsoft is staring down the barrel of a gun: Sony is on the cusp of becoming the runaway winner of yet another console generation. Microsoft’s executives appear to have concluded that this is partly due to a lack of exclusive titles on the Xbox platform. Hence, they are seeking to purchase Activision Blizzard, one of the most successful game studios, known among other things for its acclaimed Call of Duty series.

Again, the question is whether Microsoft could challenge Sony by improving its internal game-publishing branch (known as Xbox Game Studios) or whether it needs to acquire a whole new division. This is obviously a hard question to answer, but a cursory glance at the titles shipped by Microsoft’s publishing studio suggests that the issues it faces could not simply be resolved by throwing more money at its existing capacities. Indeed, Xbox Game Studios seems to be plagued by organizational failings that might only be solved by creating more competition within the company. As one gaming journalist summarized:

The current predicament of these titles goes beyond the amount of money invested or the buzzwords used to market them – it’s about Microsoft’s plan to effectively manage its studios. Encouraging independence isn’t an excuse for such a blatantly hands-off approach which allows titles to fester for years in development hell, with some fostering mistreatment to occur. On the surface, it’s just baffling how a company that’s been ranked as one of the top 10 most reputable companies eight times in 11 years (as per RepTrak) could have such problems with its gaming division.

The upshot is that Microsoft appears to have recognized that its own game-development branch is failing, and that acquiring a well-functioning rival is the only way to rapidly compete with Sony. There is thus a strong case that competition authorities and courts should proceed with caution before blocking the merger, as it has at least the potential to significantly increase competition in the game-console industry.

Finally, leveling up is sometimes a way for smaller firms to try to move faster than incumbents into a burgeoning and promising segment. The best example of this is arguably Meta’s effort to acquire Within, a developer of VR fitness apps. Rather than being an attempt to thwart competition from a competitor in the VR app market, the goal of the merger appears to be to compete with the likes of Google, Apple, and Sony at the platform level. As Mark Zuckerberg wrote back in 2015, when Meta’s VR/AR strategy was still in its infancy:

Our vision is that VR/AR will be the next major computing platform after mobile in about 10 years… The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing….

Over the next few years, we’re going to need to make major new investments in apps, platform services, development / graphics and AR. Some of these will be acquisitions and some can be built in house. If we try to build them all in house from scratch, then we risk that several will take too long or fail and put our overall strategy at serious risk. To derisk this, we should acquire some of these pieces from leading companies.

In short, many of the tech mergers that critics portray as killer acquisitions are just as likely to be attempts by firms to compete head-on with incumbents. This “leveling up” is precisely the sort of beneficial outcome that antitrust laws were designed to promote.

Building Products Is Hard

Critics are often quick to apply the “killer acquisition” label to any merger where a large platform is seeking to enter or reinforce its presence in an adjacent market. The preceding paragraphs demonstrate that it’s not that simple, as these mergers often enable firms to improve their competitive position in the adjacent market. For obvious reasons, antitrust authorities and policymakers should be careful not to thwart this competition.

The harder part is how to separate the wheat from the chaff. While I don’t have a definitive answer, an easy first step would be for authorities to more seriously consider the supply side of the equation.

Building a new product is incredibly hard, even for the most successful tech firms. Microsoft famously failed with its Zune music player and Windows Phone. The Google+ social network never gained any traction. Meta’s foray into the cryptocurrency industry was a sobering experience. Amazon’s Fire Phone bombed. Even Apple, which usually epitomizes Silicon Valley firms’ ability to enter new markets, has had its share of dramatic failures: Apple Maps, its Ping social network, and the first HomePod, to name a few.

To put it differently, policymakers should not assume that internal growth is always a realistic alternative to a merger. Instead, they should carefully examine whether such a strategy is timely, cost-effective, and likely to succeed.

This is obviously a daunting task. Firms will struggle to dispositively show that they need to acquire the target firm in order to effectively compete against an incumbent. The question essentially hinges on the quality of the firm’s existing management, engineers, and capabilities. All of these are difficult—perhaps even impossible—to measure. At the very least, policymakers can improve the odds of reaching a correct decision by approaching these mergers with an open mind.

Under Chair Lina Khan’s tenure, the FTC has opted for the opposite approach and taken a decidedly hostile view of tech acquisitions. The commission sued to block both Meta’s purchase of Within and Microsoft’s acquisition of Activision Blizzard. Likewise, several economists—notably Tommaso Valletti—have called for policymakers to reverse the burden of proof in merger proceedings, and opined that all mergers should be viewed with suspicion because, absent efficiencies, they always reduce competition.

Unfortunately, this skeptical approach is something of a self-fulfilling prophecy: when authorities view mergers with suspicion, they are likely to be dismissive of the benefits discussed above. Mergers will be blocked, and entry into adjacent markets will have to occur via internal growth, if it occurs at all.

Large tech companies’ many failed attempts to enter adjacent markets via internal growth suggest that such an outcome would ultimately harm the digital economy. Too many “boss battles” will needlessly be lost, depriving consumers of precious competition and destroying startup companies’ exit strategies.

[Image: output of the LG Research AI to the prompt “a system of copyright for artificial intelligence”]

Not only have digital-image generators like Stable Diffusion, DALL-E, and Midjourney—which make use of deep-learning models and other artificial-intelligence (AI) systems—created some incredible (and sometimes creepy – see above) visual art, but they’ve engendered a good deal of controversy, as well. Human artists have banded together as part of a fledgling anti-AI campaign; lawsuits have been filed; and policy experts have been trying to think through how these machine-learning systems interact with various facets of the law.

Debates about the future of AI have particular salience for intellectual-property rights. Copyright is notoriously difficult to protect online, and these expert systems add an additional wrinkle: it can at least be argued that their outputs can be unique creations. There are also, of course, moral and philosophical objections to those arguments, with many grounded in the supposition that only a human (or something with a brain, like humans) can be creative.

Leaving aside for the moment a potentially pitched battle over the definition of “creation,” we should be able to find consensus that at least some of these systems produce unique outputs and are not merely cutting and pasting other pieces of visual imagery into a new whole. That is, at some level, the machines are engaging in a rudimentary sort of “learning” about how humans arrange colors and lines when generating images of certain subjects. The machines then reconstruct this process and produce a new set of lines and colors that conform to the patterns they found in the human art.

But that isn’t the end of the story. Even if some of these systems’ outputs are unique and noninfringing, the way the machines learn—by ingesting existing artwork—can raise a number of thorny issues. Indeed, these systems are arguably infringing copyright during the learning phase, and such use may not survive a fair-use analysis.

We are still in the early days of thinking through how this new technology maps onto the law. Answers will inevitably come, but for now, there are some very interesting questions about the intellectual-property implications of AI-generated art, which I consider below.

The Points of Collision Between Intellectual Property Law and AI-Generated Art

AI-generated art is not a single thing. It is, rather, a collection of differing processes, each with different implications for the law. For the purposes of this post, I am going to deal with image-generation systems that use “generative adversarial networks” (GANs) and diffusion models. The various implementations of each will differ in some respects, but from what I understand, the ways that these techniques can be used to generate all sorts of media are sufficiently similar that we can begin to sketch out some of their legal implications.

A (very) brief technical description

This is a very high-level overview of how these systems work; for a more detailed (but very readable) description, see here.

A GAN is a type of machine-learning model that consists of two parts: a generator and a discriminator. The generator is trained to create new images that look like they come from a particular dataset, while the discriminator is trained to distinguish the generated images from real images in the dataset. The two parts are trained together in an adversarial manner, with the generator trying to produce images that can fool the discriminator and the discriminator trying to correctly identify the generated images.
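
To make the adversarial setup more concrete, here is a minimal sketch of that two-part training loop in PyTorch. The network sizes, the random tensors standing in for a batch of training images, and the short loop length are all illustrative assumptions, not a description of any production system.

```python
# Minimal sketch of a GAN training loop: a generator learns to fool a
# discriminator, while the discriminator learns to tell real from generated.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),   # outputs a fake "image" as a flat vector
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                    # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):                   # toy loop; real training runs far longer
    real = torch.rand(32, img_dim)        # stand-in for a batch of training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real images 1, generated images 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output "real".
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```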

A diffusion model, by contrast, analyzes the distribution of information in an image, as noise is progressively added to it. This kind of algorithm analyzes characteristics of sample images—like the distribution of colors or lines—in order to “understand” what counts as an accurate representation of a subject (i.e., what makes a picture of a cat look like a cat and not like a dog).

For example, in the generation phase, systems like Stable Diffusion start with randomly generated noise, and work backward in “denoising” steps to essentially “see” shapes:

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution – the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).

It is relevant here that, once networks using these techniques are trained, they do not need to rely on saved copies of the training images in order to generate new images. Of course, it’s possible that some implementations might be designed in a way that does save copies of those images, but for the purposes of this post, I will assume we are talking about systems that save known works only during the training phase. The models that are produced during training are, in essence, instructions to a different piece of software about how to start with a text prompt from a user and a palette of pure noise, and progressively “discover” signal in that image until some new image emerges.
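
Before turning to the legal questions, a schematic sketch of that generation loop may help. The untrained stand-in network below only illustrates the shape of the process the quoted description walks through; real systems such as Stable Diffusion use large trained, text-conditioned models, and the dimensions and step count here are arbitrary assumptions.

```python
# Schematic "denoising" loop: start from pure noise and repeatedly subtract
# the noise the model thinks it sees, drifting toward the learned distribution.
import torch
import torch.nn as nn

img_dim = 28 * 28
steps = 50

# Stand-in for the trained model: predicts the noise present in the current image.
noise_predictor = nn.Sequential(
    nn.Linear(img_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim),
)

x = torch.randn(1, img_dim)                      # start from pure noise
with torch.no_grad():
    for t in range(steps, 0, -1):
        predicted_noise = noise_predictor(x)     # "which part of this looks like noise?"
        x = x - (1.0 / steps) * predicted_noise  # remove a fraction of it each step

# After training, x would converge toward the distribution of images the model
# learned, without the model storing any of the training images themselves.
```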

Input-stage use of intellectual property

OpenAI, the developer of some of the most popular AI tools, is not shy about its use of protected works in the training phase of AI algorithms. In comments to the U.S. Patent and Trademark Office (PTO), it notes that:

…[m]odern AI systems require large amounts of data. For certain tasks, that data is derived from existing publicly accessible “corpora”… of data that include copyrighted works. By analyzing large corpora (which necessarily involves first making copies of the data to be analyzed), AI systems can learn patterns inherent in human-generated data and then use those patterns to synthesize similar data which yield increasingly compelling novel media in modalities as diverse as text, image, and audio. (emphasis added).

Thus, at the training stage, the most popular forms of machine-learning systems require making copies of existing works. And where the material being used is neither in the public domain nor licensed, an infringement occurs (as Getty Images alleges in a recently filed suit against Stability AI). Some affirmative defense is therefore needed to excuse the infringement.

Toward this end, OpenAI believes that its algorithmic training should qualify as a fair use. Other major services that use these AI techniques to “learn” from existing media would likely make similar arguments. But, at least in the way that OpenAI has framed the fair-use analysis (that these uses are sufficiently “transformative”), it’s not clear that they should qualify.

The purpose and character of the use

In brief, fair use—found in 17 USC § 107—provides for an affirmative defense against infringement when the use is  “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.” When weighing a fair-use defense, a court must balance a number of factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

OpenAI’s fair-use claim is rooted in the first factor: the nature and character of the use. I should note, then, that what follows is solely a consideration of Factor 1, with special attention paid to whether these uses are “transformative.” But it is important to stipulate that fair-use analysis is a multi-factor test and that, even within the first factor, it’s not mandatory that a use be “transformative.” It is entirely possible that a court balancing all of the factors could, indeed, find that OpenAI is engaged in fair use, even if it does not agree that the use is “transformative.”

Whether the use of copyrighted works to train an AI is “transformative” is certainly a novel question, but it is likely answered through an observation that the U.S. Supreme Court made in Campbell v. Acuff-Rose Music:

[W]hat Sony said simply makes common sense: when a commercial use amounts to mere duplication of the entirety of an original, it clearly “supersede[s] the objects,”… of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur… But when, on the contrary, the second use is transformative, market substitution is at least less certain, and market harm may not be so readily inferred.

A key question, then, is whether training an AI on copyrighted works amounts to mere “duplication of the entirety of an original” or is sufficiently “transformative” to support a fair-use finding. OpenAI, as noted above, believes its use is highly transformative. According to its comments:

Training of AI systems is clearly highly transformative. Works in training corpora were meant primarily for human consumption for their standalone entertainment value. The “object of the original creation,” in other words, is direct human consumption of the author’s expression. Intermediate copying of works in training AI systems is, by contrast, “non-expressive”: the copying helps computer programs learn the patterns inherent in human-generated media. The aim of this process—creation of a useful generative AI system—is quite different than the original object of human consumption. The output is different too: nobody looking to read a specific webpage contained in the corpus used to train an AI system can do so by studying the AI system or its outputs. The new purpose and expression are thus both highly transformative.

But the way that OpenAI frames its system works against its interests in this argument. As noted above, and reinforced in the immediately preceding quote, an AI system like DALL-E or Stable Diffusion is actually made of at least two distinct pieces. The first is a piece of software that ingests existing works and creates a file that can serve as instructions to the second piece of software. The second piece of software then takes the output of the first part and can produce independent results. Thus, there is a clear discontinuity in the process, whereby the ultimate work created by the system is disconnected from the creative inputs used to train the software.

Therefore, contrary to what OpenAI asserts, the protected works are indeed ingested into the first part of the system “for their standalone entertainment value.” That is to say, the software is learning what counts as “standalone entertainment value,” and therefore the works must be used in those terms.

Surely, a computer is not sitting on a couch and surfing for its own entertainment. But it is precisely for that “standalone entertainment value” that the first piece of software is being shown copyrighted material. By contrast, parody or “remixing” uses incorporate the work into some secondary expression that transforms the input. The way these systems work is to learn what makes a piece entertaining and then to discard that piece altogether. Moreover, this use of art qua art most certainly interferes with the existing market insofar as it occurs in lieu of reaching a licensing agreement with rightsholders.

The 2nd U.S. Circuit Court of Appeals dealt with an analogous case. In American Geophysical Union v. Texaco, the 2nd Circuit considered whether Texaco’s photocopying of scientific articles produced by the plaintiffs qualified for a fair-use defense. Texaco employed between 400 and 500 research scientists and, as part of supporting their work, maintained subscriptions to a number of scientific journals. It was common practice for Texaco’s scientists to photocopy entire articles and save them in a file.

The plaintiffs sued for copyright infringement. Texaco asserted that photocopying by its scientists for the purposes of furthering scientific research—that is, to train the scientists on the content of the journal articles—should count as a fair use, at least in part because it was sufficiently “transformative.” The 2nd Circuit disagreed:

The “transformative use” concept is pertinent to a court’s investigation under the first factor because it assesses the value generated by the secondary use and the means by which such value is generated. To the extent that the secondary use involves merely an untransformed duplication, the value generated by the secondary use is little or nothing more than the value that inheres in the original. Rather than making some contribution of new intellectual value and thereby fostering the advancement of the arts and sciences, an untransformed copy is likely to be used simply for the same intrinsic purpose as the original, thereby providing limited justification for a finding of fair use… (emphasis added).

As in the case at hand, the 2nd Circuit observed that making full copies of the scientific articles was solely for the consumption of the material itself. A rejoinder, of course, is that training these AI systems surely advances scientific research and, thus, does foster the “advancement of the arts and sciences.” But in American Geophysical Union, where the secondary use was explicitly for the creation of new and different scientific outputs, the court still held that making copies of one scientific article in order to learn and produce new scientific innovations did not count as “transformative.”

What this case demonstrates is that one cannot merely state that some social goal will be advanced in the future by permitting an exception to copyright protection today. As the 2nd Circuit put it:

…the dominant purpose of the use is a systematic institutional policy of multiplying the available number of copies of pertinent copyrighted articles by circulating the journals among employed scientists for them to make copies, thereby serving the same purpose for which additional subscriptions are normally sold, or… for which photocopying licenses may be obtained.

The secondary use itself must be transformative and different. Where an AI system ingests copyrighted works, that use is simply not transformative; it is using the works in their original sense in order to train a system to be able to make other original works. As in American Geophysical Union, the AI creators are completely free to seek licenses from rightsholders in order to train their systems.

Finally, there is a sense in which this machine learning might not infringe on copyrights at all. To my knowledge, the technology does not itself exist, but if it were possible for a machine to somehow “see” in the way that humans do—without using stored copies of copyrighted works—then merely “learning” from those works, to the extent we can call it learning, probably would not violate copyright laws.

Do the outputs of these systems violate intellectual property laws?

The outputs of GANs and diffusion models may or may not violate IP laws, but there is nothing inherent in the processes described above to dictate that they must. As noted, the most common AI systems do not save copies of existing works, but merely “instructions” (more or less) on how to create new works that conform to patterns they found by examining existing work. If we assume that a system isn’t violating copyright at the input stage, it’s entirely possible that it can produce completely new pieces of art that have never before existed and do not violate copyright.

They can, however, be made to violate IP rights. For example, trademark violations appear to be one of the most popular uses of these AI systems by end users. To take but one example, a quick search of Google Images for “midjourney iron man” returns a slew of images that almost certainly violate trademarks for the character Iron Man. Similarly, these systems can be instructed to generate art that is not just “in the style” of a particular artist, but that very closely resembles existing pieces. In this sense, the system would be making a copy that theoretically infringes. 

There is a common bug in such systems that leads to outputs that are more likely to violate copyright in this way. Known as “overfitting,” it occurs when the training set presented to these AI systems contains too many instances of a particular image. The resulting model then encodes too much information about that specific image, such that when the AI generates a new image, it is constrained to producing something very close to the original.
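
One way practitioners try to limit this kind of memorization is to deduplicate the training corpus before training. The snippet below is a simplified, hypothetical illustration that counts exact duplicates by file hash; real pipelines typically rely on perceptual hashes or embedding similarity to catch near-duplicates as well, so treat this only as a sketch of the idea.

```python
# Hypothetical illustration: flag images that appear many times in a training
# corpus, since heavily duplicated samples are the ones a model is most likely
# to memorize and later reproduce. Exact byte-hashing is a simplification.
import hashlib
from collections import Counter
from pathlib import Path

def image_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def duplicate_report(corpus_dir: str, threshold: int = 10) -> dict[str, int]:
    counts = Counter(image_hash(p) for p in Path(corpus_dir).glob("*.png"))
    # Any image repeated more than `threshold` times is a memorization risk.
    return {h: n for h, n in counts.items() if n > threshold}
```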

An argument can also be made that generating art “in the style of” a famous artist violates moral rights (in jurisdictions where such rights exist).

At least in the copyright space, cases like Sony are going to become crucial. Does the user side of these AI systems have substantial noninfringing uses? If so, the firms that host software for end users could avoid secondary-infringement liability, and the onus would fall on users to avoid violating copyright laws. At the same time, it seems plausible that legislatures could place some obligation on these providers to implement filters to mitigate infringement by end users.

Opportunities for New IP Commercialization with AI

There are a number of ways that AI systems may inexcusably infringe on intellectual-property rights. As a best practice, I would encourage the firms that operate these services to seek licenses from rightsholders. While this would surely be an expense, it also opens new opportunities for both sides to generate revenue.

For example, an AI firm could develop its own version of YouTube’s ContentID that allows creators to opt their work into training. For some well-known artists, this could be negotiated with an upfront licensing fee. On the user side, any artist who has opted in could then be selected as a “style” for the AI to emulate. When users generate an image, a royalty payment to the artist would be created. Creators would also have the option to remove their influence from the system if they so desired.
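
As a rough illustration of how such an opt-in scheme might be wired together, consider the sketch below. Every name, rate, and data structure in it is invented for illustration; it simply tracks opted-in artists, credits a per-image royalty whenever a user generates work in an artist’s style, and lets an artist withdraw.

```python
# Hypothetical opt-in style registry and royalty ledger (all details assumed).
from dataclasses import dataclass, field

@dataclass
class StyleRegistry:
    per_image_royalty: float = 0.05                       # assumed flat rate, in dollars
    opted_in: set[str] = field(default_factory=set)
    ledger: dict[str, float] = field(default_factory=dict)

    def opt_in(self, artist: str, upfront_fee_paid: float = 0.0) -> None:
        self.opted_in.add(artist)
        self.ledger.setdefault(artist, 0.0)
        self.ledger[artist] += upfront_fee_paid            # optional upfront license fee

    def opt_out(self, artist: str) -> None:
        # The artist's style stops being offered; accrued royalties remain payable.
        self.opted_in.discard(artist)

    def record_generation(self, artist: str) -> None:
        if artist not in self.opted_in:
            raise ValueError(f"{artist} has not opted in to style licensing")
        self.ledger[artist] += self.per_image_royalty

# Usage sketch:
# registry = StyleRegistry()
# registry.opt_in("Artist A", upfront_fee_paid=100.0)
# registry.record_generation("Artist A")   # credits $0.05 per generated image
```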

Undoubtedly, there are other ways to monetize the relationship between creators and the use of their work in AI systems. Ultimately, the firms that run these systems will not be able to simply wish away IP laws. There are going to be opportunities for creators and AI firms to both succeed, and the law should help to generate that result.

In our previous post on Gonzalez v. Google LLC, which will come before the U.S. Supreme Court for oral arguments Feb. 21, Kristian Stout and I argued that, while the U.S. Justice Department (DOJ) got the general analysis right (looking to Roommates.com as the framework for exceptions to the general protections of Section 230), they got the application wrong (saying that algorithmic recommendations should be excepted from immunity).

Now, after reading Google’s brief, as well as the briefs of amici on their side, it is even more clear to me that:

  1. algorithmic recommendations are protected by Section 230 immunity; and
  2. creating an exception for such algorithms would severely damage the internet as we know it.

I address these points in reverse order below.

Google on the Death of the Internet Without Algorithms

The central point that Google makes throughout its brief is that a finding that Section 230’s immunity does not extend to the use of algorithmic recommendations would have potentially catastrophic implications for the internet economy. Google and amici for respondents emphasize the ubiquity of recommendation algorithms:

Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack. The result of these algorithms is unprecedented access to knowledge, from the lifesaving (“how to perform CPR”) to the mundane (“best pizza near me”). Google Search uses algorithms to recommend top search results. YouTube uses algorithms to share everything from cat videos to Heimlich-maneuver tutorials, algebra problem-solving guides, and opera performances. Services from Yelp to Etsy use algorithms to organize millions of user reviews and ratings, fueling global commerce. And individual users “like” and “share” content millions of times every day. – Brief for Respondent Google, LLC at 2.

The “recommendations” they challenge are implicit, based simply on the manner in which YouTube organizes and displays the multitude of third-party content on its site to help users identify content that is of likely interest to them. But it is impossible to operate an online service without “recommending” content in that sense, just as it is impossible to edit an anthology without “recommending” the story that comes first in the volume. Indeed, since the dawn of the internet, virtually every online service—from news, e-commerce, travel, weather, finance, politics, entertainment, cooking, and sports sites, to government, reference, and educational sites, along with search engines—has had to highlight certain content among the thousands or millions of articles, photographs, videos, reviews, or comments it hosts to help users identify what may be most relevant. Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable. As a result, exposing online services to liability for the “recommendations” inherent in those organizational choices would expose them to liability for third-party content virtually all the time. – Amicus Brief for Meta Platforms at 3-4.

In other words, if Section 230 were limited in the way that the plaintiffs (and the DOJ) seek, internet platforms’ ability to offer users useful information would be strongly attenuated, if not completely impaired. The resulting legal exposure would lead inexorably to far less of the kinds of algorithmic recommendations upon which the modern internet is built.

This is, in part, why we weren’t able to fully endorse the DOJ’s brief in our previous post. The DOJ’s brief simply goes too far. It would be unreasonable to establish as a categorical rule that use of the ubiquitous auto-discovery algorithms that power so much of the internet would strip a platform of Section 230 protection. The general rule advanced by the DOJ’s brief would have detrimental and far-ranging implications.

Amici on Publishing and Section 230(f)(4)

Google and the amici also make a strong case that algorithmic recommendations are inseparable from publishing. They have a strong textual hook in Section 230(f)(4), which explicitly protects “enabling tools that… filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

As the amicus brief from a group of internet-law scholars—including my International Center for Law & Economics colleagues Geoffrey Manne and Gus Hurwitz—put it:

Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text. – Amicus Brief of Internet Law Scholars at 3-4.

In other words, Section 230 protects not just the conveyance of information, but how that information is displayed. Algorithmic recommendations are a subset of those display tools that allow users to find what they are looking for with ease. Section 230 can’t be reasonably read to exclude them.

Why This Isn’t Really (Just) a Roommates.com Case

This is where the DOJ’s amicus brief (and our previous analysis) misses the point. This is not strictly a Roommates.com case. The case actually turns on whether algorithmic recommendations are separable from the publication of third-party content, rather than on whether they are design choices akin to what was occurring in that case.

For instance, in our previous post, we argued that:

[T]he DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did.

While we thought the DOJ went too far in differentiating algorithmic recommendations from other uses of algorithms, we gave them too much credit in applying the Roommates.com analysis. Section 230 was meant to immunize filtering tools, so long as the information provided is from third parties. Algorithmic recommendations—like the type at issue with YouTube’s “Up Next” feature—are less like the conduct in Roommates.com and much more like a search engine.

The DOJ did, however, have a point regarding algorithmic tools in that they may—like any other tool a platform might use—be employed in a way that transforms the automated promotion into a direct endorsement or original publication. For instance, it’s possible to use algorithms to intentionally amplify certain kinds of content in such a way as to cultivate more of that content.

That’s, after all, what was at the heart of Roommates.com. The site was designed to elicit responses from users that violated the law. Algorithms can do that, but as we observed previously, and as the many amici in Gonzalez observe, there is nothing inherent to the operation of algorithms that match users with content that makes their use categorically incompatible with Section 230’s protections.

Conclusion

After looking at the textual and policy arguments forwarded by both sides in Gonzalez, it appears that Google and amici for respondents have the better of it. As several amici argued, to the extent there are good reasons to reform Section 230, Congress should take the lead. The Supreme Court shouldn’t take this case as an opportunity to significantly change the consensus of the appellate courts on the broad protections of Section 230 immunity.

At the Jan. 26 Policy in Transition forum—the Mercatus Center at George Mason University’s second annual antitrust forum—various former and current antitrust practitioners, scholars, judges, and agency officials held forth on the near-term prospects for the neo-Brandeisian experiment undertaken in recent years by both the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ). In conjunction with the forum, Mercatus also released a policy brief on 2022’s significant antitrust developments.

Below, I summarize some of the forum’s noteworthy takeaways, followed by concluding comments on the current state of the antitrust enterprise, as reflected in forum panelists’ remarks.

Takeaways

    1. The consumer welfare standard is neither a recent nor an arbitrary antitrust-enforcement construct, and it should not be abandoned in order to promote a more “enlightened” interventionist antitrust.

George Mason University’s Donald Boudreaux emphasized in his introductory remarks that the standard goes back to Adam Smith, who noted in “The Wealth of Nations” nearly 250 years ago that the appropriate end of production is the consumer’s benefit. Moreover, American Antitrust Institute President Diana Moss, a leading proponent of more aggressive antitrust enforcement, argued in standalone remarks against abandoning the consumer welfare standard, as it is sufficiently flexible to justify a more interventionist agenda.

    2. The purported economic justifications for a far more aggressive antitrust-enforcement policy on mergers remain unconvincing.

Moss’ presentation expressed skepticism about vertical-merger efficiencies and called for more aggressive challenges to such consolidations. But Boudreaux skewered those arguments in a recent four-point rebuttal at Café Hayek. As he explains, Moss’ call for more vertical-merger enforcement ignores the fact that “no one has stronger incentives than do the owners and managers of firms to detect and achieve possible improvements in operating efficiencies – and to avoid inefficiencies.”

Moss’ complaint about chronic underenforcement mistakes by overly cautious agencies also ignores the fact that there will always be mistakes, and there is no reason to believe “that antitrust bureaucrats and courts are in a position to better predict the future [regarding which efficiencies claims will be realized] than are firm owners and managers.” Moreover, Moss provided “no substantive demonstration or evidence that vertical mergers often lead to monopolization of markets – that is, to industry structures and practices that harm consumers. And so even if vertical mergers never generate efficiencies, there is no good argument to use antitrust to police such mergers.”

And finally, Boudreaux considers Moss’ complaint that a court refused to condemn the AT&T-Time Warner merger, arguing that this does not demonstrate that antitrust enforcement is deficient:

[A]s soon as the  . . . merger proved to be inefficient, the parties themselves undid it. This merger was undone by competitive market forces and not by antitrust! (Emphasis in the original.)

    3. The agencies, however, remain adamant in arguing that merger law has been badly underenforced. As such, the new leadership plans to charge ahead, willing to challenge more mergers based on mere market structure while paying little heed to efficiency arguments or actual showings of likely future competitive harm.

In her afternoon remarks at the forum, Principal Deputy Assistant U.S. Attorney General for Antitrust Doha Mekki highlighted five major planks of Biden administration merger enforcement going forward.

  • Clayton Act Section 7 is an incipiency statute. Thus, “[w]hen a [mere] change in market structure suggests that a firm will have an incentive to reduce competition, that should be enough [to justify a challenge].”
  • “Once we see that a merger may lead to, or increase, a firm’s market power, only in very rare circumstances should we think that a firm will not exercise that power.”
  • A structural presumption “also helps businesses conform their conduct to the law with more confidence about how the agencies will view a proposed merger or conduct.”
  • Efficiencies defenses will be given short shrift, and perhaps ignored altogether. This is because “[t]he Clayton Act does not ask whether a merger creates a more or less efficient firm—it asks about the effect of the merger on competition. The Supreme Court has never recognized efficiencies as a defense to an otherwise illegal merger.”
  • Merger settlements have often failed to preserve competition, and they will be highly disfavored. Therefore, expect a lot more court challenges to mergers than in recent decades. In short, “[w]e must be willing to litigate. . . . [W]e need to acknowledge the possibility that sometimes a court might not agree with us—and yet go to court anyway.”

Mekki’s comments suggest to me that the soon-to-be-released new draft merger guidelines may emphasize structural market-share tests, generally reject efficiencies justifications, and eschew the economic subtleties found in the current guidelines.

    4. The agencies—and the FTC, in particular—have serious institutional problems that undermine their effectiveness, and risk a loss of credibility before the courts in the near future.

In his address to the forum, former FTC Chairman Bill Kovacic lamented the inefficient limitations on reasoned FTC deliberations imposed by the Sunshine Act, which chills informal communications among commissioners. He also pointed to our peculiar global status of having two enforcers with duplicative antitrust authority, and lamented the lack of policy coherence, which reflects imperfect coordination between the agencies.

Perhaps most importantly, Kovacic raised the specter of the FTC losing credibility in a possible world where Humphrey’s Executor is overturned (see here) and the commission is granted little judicial deference. He suggested taking lessons on policy planning and formulation from foreign enforcers—the United Kingdom’s Competition and Markets Authority, in particular. He also decried agency officials’ decisions to belittle prior administrations’ enforcement efforts, seeing it as detracting from the international credibility of U.S. enforcement.

    5. The FTC is embarking on a novel interventionist path at odds with decades of enforcement policy.

In luncheon remarks, Commissioner Christine S. Wilson lamented the lack of collegiality and consultation within the FTC. She warned that far-reaching rulemakings and other new interventionist initiatives may yield a backlash that undermines the institution.

Following her presentation, a panel of FTC experts discussed several aspects of the commission’s “new interventionism.” According to one panelist, the FTC’s new Section 5 Policy Statement on Unfair Methods of Competition (which ties “unfairness” to arbitrary and subjective terms) “will not survive in” (presumably, will be given no judicial deference by) the courts. Another panelist bemoaned rule-of-law problems arising from FTC actions, called for consistency in FTC and DOJ enforcement policies, and warned that the new merger guidelines will represent a “paradigm shift” that generates more business uncertainty.

The panel expressed doubts about the legal prospects for a proposed FTC rule on noncompete agreements, and noted that constitutional challenges to the agency’s authority may engender additional difficulties for the commission.

    6. The DOJ is greatly expanding its willingness to litigate, and is taking actions that may undermine its credibility in court.

Assistant U.S. Attorney General for Antitrust Jonathan Kanter has signaled a disinclination to settle, as well as an eagerness to litigate large numbers of cases (toward that end, he has hired a huge number of litigators). One panelist noted that, given this posture from the DOJ, there is a risk that judges may come to believe that the department’s litigation decisions are not well-grounded in the law and the facts. The business community may also have a reduced willingness to “buy in” to DOJ guidance.

Panelists also expressed doubts about the wisdom of the DOJ bringing more “criminal Sherman Act Section 2” cases. The Sherman Act is a criminal statute, but such prosecutions would raise difficult questions under criminal law’s “beyond a reasonable doubt” standard, as well as due-process concerns. Panelists also warned that, if the new merger guidelines are “unsound,” they may detract from the DOJ’s credibility in federal court.

    7. International antitrust developments have introduced costly new ex ante competition-regulation and enforcement-coordination problems.

As one panelist explained, the European Union’s implementation of the new Digital Markets Act (DMA) will harmfully undermine market forces. The DMA is a form of ex ante regulation—primarily applicable to large U.S. digital platforms—that will harmfully interject bureaucrats into network planning and design. The DMA will lead to inefficiencies, market fragmentation, and harm to consumers, and will inevitably have spillover effects outside Europe.

Even worse, the DMA will not displace the application of EU antitrust law, but will merely add to its burdens. Regrettably, the DMA’s ex ante approach is being imitated by many other enforcement regimes, and the U.S. government tacitly supports it. The DMA has not been included in the U.S.-EU joint competition dialogue, which, as a result, risks failure. Canada and the U.K. should also be added to the dialogue.

Other International Concerns

The international panelists also noted that there is an unfortunate lack of convergence on antitrust procedures. Furthermore, different jurisdictions manifest substantial inconsistencies in their approaches to multinational merger analysis, where better coordination is needed. There is a special problem in the areas of merger review and of criminal leniency for price fixers: when multiple jurisdictions need to “sign off” on an enforcement matter, the “most restrictive” jurisdiction has an effective veto.

Finally, former Assistant U.S. Attorney General for Antitrust James Rill—perhaps the most influential promoter of the adoption of sound antitrust laws worldwide—closed the international panel with a call for enhanced transnational cooperation. He highlighted the importance of global convergence on sound antitrust procedures, emphasizing due process. He also advocated bolstering International Competition Network (ICN) and OECD Competition Committee convergence initiatives, and explained that greater transparency in agency-enforcement actions is warranted. In that regard, Rill said, ICN nongovernmental advisers should be given a greater role.

Conclusion

Taken as a whole, the forum’s various presentations painted a rather gloomy picture of the short-term prospects for sound, empirically based, economics-centric antitrust enforcement.

In the United States, the enforcement agencies are committed to far more aggressive antitrust enforcement, particularly with respect to mergers. The agencies’ new approach downplays efficiencies, and they will be quick to presume that broad categories of business conduct are anticompetitive, relying far less on case-specific economic analysis.

The outlook is also bad overseas, as European Union enforcers are poised to implement new ex ante regulation of competition by large platforms as an addition to—not a substitute for—established burdensome antitrust enforcement. Most foreign jurisdictions appear to be following the European lead, and the U.S. agencies are doing nothing to discourage them. Indeed, they appear to fully support the European approach.

The consumer welfare standard, which until recently was the stated touchstone of American antitrust enforcement—and was given at least lip service in Europe—has more or less been set aside. The one saving grace in the United States is that the federal courts may put a halt to the agencies’ overweening ambitions, but that will take years. In the meantime, consumer welfare will suffer and welfare-enhancing business conduct will be disincentivized. The EU courts also may place a minor brake on European antitrust expansionism, but that is less certain.

Recall, however, that when evils flew out of Pandora’s box, hope remained. Let us hope, then, that the proverbial worm will turn, and that new leadership—inspired by hopeful and enlightened policy advocates—will restore principled antitrust grounded in the promotion of consumer welfare.

In a prior post, I made the important if wholly unoriginal point that the Federal Trade Commission’s (FTC) recent policy statement regarding unfair methods of competition (UMC)—perhaps a form of “soft law”—has neither legal force nor precedential value. Gus Hurwitz offers a more thorough discussion of the issue here.

But policy statements may still have value as guidance documents for industry and the bar. They can also inform the courts, providing a framework for the commission’s approach to the specific facts and circumstances that underlie a controversy. That is, as the 12th century sage Maimonides endeavored in his own “Guide for the Perplexed,” they can elucidate rationales for particular principles and decisions of law. 

I also pointed out (also unoriginally) that the statement’s guidance value might be undermined by its own vagueness. Or as former FTC Commissioner and Acting Chairman Maureen Ohlhausen put it:

While ostensibly intended to provide such guidance, the new Policy Statement contains few specifics about the particular conduct that the Commission might deem to be unfair, and suggests that the FTC has broad discretion to challenge nearly any conduct with which it disagrees.

There’s so much going on at (or being announced by) my old agency that it’s hard to keep up. One recent development reaches back into FTC history—all the way to late 2021—to find an initiative at the boundary of soft and hard law: that is, the issuance to more than 700 U.S. firms of notices of penalty offenses about “fake reviews and other misleading endorsements.” 

A notice of penalty offenses is supposed to provide a sort of firm-specific guidance: a recipient is informed that certain sorts of conduct have been deemed to violate the FTC Act. It’s not a decision or even an allegation that the firm has engaged in such prohibited conduct. In that way, it’s like soft law. 

On the other hand, it’s not entirely anemic. In AMG Capital, the Supreme Court held that the FTC cannot obtain equitable monetary remedies for violations of the FTC Act in the first instance—at least, not under Section 13(b) of the FTC Act. But there are circumstances under which the FTC can get statutory penalties (up to just over $50,000 per violation, and a given course of conduct might entail many violations) for, e.g., violating a regulation that implements Section 5.

That serves as useful background to observe that, among the FTC’s recent advance notices of proposed rulemaking (ANPRs) is one about regulating fake reviews. (Commissioner Christine S. Wilson’s dissent in the matter is here.)

Here it should be noted that Section 5(m) of the FTC Act also permits monetary penalties if “the Commission determines in a proceeding . . . that any act or practice is unfair or deceptive, and issues a final cease and desist order” and the firm has “actual knowledge that such act or practice is unfair or deceptive and is unlawful.”  

What does that mean? In brief, if there’s an agency decision (not a consent order, but not a federal court decision either) that a certain type of conduct by one firm is “unfair or deceptive” under Section 5, then another firm can be assessed statutory monetary penalties if the Commission determines that it has undertaken the same type of conduct and if, because the firm has received a notice of penalty offenses, it has “actual knowledge that such act or practice is unfair or deceptive.” 

So, now we’re back to monetary penalties for violations of Section 5 in the first instance if a very special form of mens rea can be established. A notice of penalty offenses provides guidance, but it also carries real legal risk. 

Back to pesky questions and details. Do the letters provide notice? What might 700-plus disparate contemporary firms all do that fits a given course of unlawful conduct (at least as determined by administrative process)? To grab just a few examples among companies that begin with the letter “A”: what problematic conduct might be common to, e.g., Abbott Labs, Abercrombie & Fitch, Adidas, Adobe, Albertson’s, Altria, Amazon, and Annie’s (the organic-food company)?

Well, the letter (or the sample posted) points to all sorts of potentially helpful guidance about not running afoul of the law. But more specifically, the FTC points to eight administrative decisions that model the conduct (by other firms) already found to be unfair or deceptive. That, surely, is where the rubber hits the road and the details are specified. Or is it? 

The eight administrative decisions are an odd lot. Most of the matters have to do with manufacturers or packagers (or service providers) making materially false or misleading statements in advertising their products or services. 

The most recent case is In the Matter of Cliffdale Associates, a complaint filed in 1981 and decided by the commission in 1984. For those unfamiliar with Cliffdale (nearly everyone?), the defendant sold something “variously known as the Ball-Matic, the Ball-Matic Gas Saver Valve and the Gas Saver Valve.” The oldest decision, Wilbert W. Haase, was filed in 1939 and decided in 1941 (one of two decided during World War II).

The decisions make for interesting reading. For example, in R.J. Reynolds, we learn that:

…while as a general proposition the smoking of cigarettes in moderation by individuals not allergic nor hypersensitive to cigarette smoking, who are accustomed to smoking and are in normal good health, with no existing pathology of any of the bodily systems, is not appreciably harmful-what is normal for one person may be excessive for another.

I’ll confess: In my misspent youth, I did some research at the National Institutes of Health (NIH), but I did not know that.

Interesting reading but, dare I suggest, not super helpful from the standpoint of notice or guidance. R.J. Reynolds manufactured, advertised, and sold cigarettes and other tobacco products, and it advertised that smoking its cigarettes was “either beneficial to or not injurious to a particular bodily system.” So, “not appreciably harmful,” but that doesn’t mean therapeutic.

A few things stand out. First, all of the complaints were brought prior to the birth of the internet. Second, five of the eight complaints were brought before the 1975 Magnuson-Moss Act amendments to the FTC Act that, among other things, revised the standards for finding conduct “unfair or deceptive” under Section 5.  Third, having read the cases, I have no idea how the old cases are supposed to provide notice to the myriad recipients of these letters. 

Section 5 provides that “unfair methods of competition” and “unfair or deceptive acts or practices in or affecting commerce” are unlawful. Section 5(n)—added by the 1994 amendments to the FTC Act—qualifies the prohibition:

The Commission shall have no authority under this section … to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. … the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

As Geoff Manne and I have noted, the amendment was adopted by a Congress that thought the FTC had been overreaching in its application of Section 5. Others have made (and expanded upon) the same observation: former FTC Chairman William Kovacic’s 2010 Senate testimony is one excellent example among many. Continued congressional frustration actually briefly led to a shutdown of the FTC. 

Here’s my take on the notice provided by the notices of penalty offenses: they might as well tell firms (a) that the FTC has found that violating Section 5’s prohibition of unfair or deceptive acts or practices violates Section 5’s prohibition of unfair or deceptive acts or practices; and (b) that we’re not saying you violated Section 5, and we’re not saying you didn’t, but if you do violate Section 5, you’re subject to statutory monetary penalties, statutory and judicial impediments to monetary penalties notwithstanding.

What sort of notice is that? Might the federal courts see this as an attempted end-run around statutory limits on the FTC’s authority? Might Congress? And if you’re perplexed by the FTC’s mass notice action, where should you look for guidance?

Under a recently proposed rule, the Federal Trade Commission (FTC) would ban the use of noncompete terms in employment agreements nationwide. Noncompetes are contractual terms under which a worker agrees not to work for the employer’s competitors for a certain period after leaving. The FTC’s rule would be a major policy change, regulating future contracts and retroactively voiding current ones. With limited exceptions, it would cover everyone in the United States.

When I scan academic economists’ public commentary on the ban over the past few weeks (which basically means people on Twitter), I see almost universal support for the FTC’s proposed ban. You see similar support if you expand to general econ commentary, like Timothy Lee at Full Stack Economics. Where you see pushback, it comes from people at think tanks (like me), or it takes the form of hushed skepticism rather than the kind of open disagreement you see on most policy issues.

The proposed rule grew out of an executive order by President Joe Biden in 2021, which I wrote about at the time. My argument was that there is a simple economic rationale for the contract: noncompetes encourage both parties to invest in the employee-employer relationship, just like marriage contracts encourage spouses to invest in each other.

Somehow, reposting my newsletter on the economic rationale for noncompetes has turned me into a “pro-noncompete guy” on Twitter.

The discussions have been disorienting. I feel like I’m taking crazy pills! If you ask me, “what new thing should policymakers do to address labor market power?” I would probably say something about noncompetes! Employers abuse them. The stories about people unable to find a new job because a noncompete binds them are devastating.

Yet, while recognizing the problems with noncompetes, I do not support the complete ban.

That puts me out of step with most vocal economics commentators. Where does this disagreement come from? How do I think about policy generally, and why am I the odd one out?

My Interpretation of the Research

One possibility is that I’m not such a lonely voice, and that the sample of vocal Twitter users is biased toward particular policy views. The University of Chicago Booth School of Business’ Initiative on Global Markets recently polled academic economists about noncompetes and mostly found differing opinions and levels of certainty about the effects of a ban. For example, 43% were uncertain whether a ban would generate a “substantial increase in wages in the affected industries.” However, maybe that is because the word “substantial” is unclear. That’s a problem with these surveys.

Still, more economists surveyed agreed than disagreed. I would answer “disagree” to that statement, as worded.

Why do I differ? One cynical response would be that I don’t know the recent literature, and my views are outdated. But from the research I’ve done for a paper I’m writing on labor-market power, I’m fairly well-versed in the noncompete literature. I don’t know it better than the active researchers in the field, but I know it better than the average economist responding to the FTC’s proposal, and definitely better than most lawyers. My disagreement also isn’t about me being some free-market fanatic. I’m not, and some free-market types are skeptical of noncompetes. My priors are more complicated (critics might say “confused”) than that, as I will explain below.

After much soul-searching, I’ve concluded that the disagreement is real and results from my—possibly weird—understanding of how we should go from the science of economics to the art of policy. That’s what I want to explain today and get us to think more about.

Let’s start with the literature and the science of economics. First, we need to know “the facts.” The original papers focused a lot on collecting data and facts about noncompetes. We don’t have amazing data on the prevalence of noncompetes, but we know something, which is more than we could say a decade ago. For example, Evan Starr, J.J. Prescott, & Norman Bishara (2021) conducted a large survey in which they found that “18 percent of labor force participants are bound by noncompetes, with 38 percent having agreed to at least one in the past.”[1] We need to know these things and thank the researchers for collecting data.

With these facts, we can start running regressions. In addition to the paper above, many papers develop indices of noncompete “enforceability” by state. Then we can regress things like wages on an enforceability index. Many papers—like Starr, Prescott, & Bishara above—run cross-state regressions and find that wages are higher in states with higher noncompete enforceability. They also find more worker training where enforceability is higher. But that kind of correlation is littered with selection issues. High-income workers are more likely to sign noncompetes. That’s not causal. The authors carefully explain this, but sometimes correlations are the best we have—e.g., if we want to study the effect of noncompetes on doctors’ wages and their poaching of clients.

Some people will simply point to California (which has banned noncompetes for decades) and say, “see, noncompete bans don’t destroy an economy.” Unfortunately, many things make California unique, so while that is evidence, it’s hardly causal.

The most credible results come from recent changes in state policy. These allow us to run simple difference-in-differences analyses to uncover causal estimates. These results are reasonably transparent and easy to understand.
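To make the mechanics concrete, here is a minimal sketch of the two-way fixed-effects difference-in-differences regression this kind of paper runs. Everything in it is an illustrative assumption: the synthetic data, the two “ban” states, and the built-in 3% effect are invented for exposition and are not the data or specification of any paper cited here.

```python
# A minimal difference-in-differences sketch on synthetic data; the states,
# sample sizes, and "true" 3% effect are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in [f"S{i}" for i in range(10)]:
    treated = int(s in {"S0", "S1"})              # two hypothetical "ban" states
    for year in range(2004, 2013):
        post = int(year >= 2008)                  # ban takes effect in 2008
        for _ in range(50):                       # 50 workers per state-year
            log_wage = (2.5 + 0.10 * treated + 0.02 * (year - 2004)
                        + 0.03 * treated * post   # assumed true effect: +3%
                        + rng.normal(0, 0.2))
            rows.append(dict(state=s, year=year, treated=treated,
                             post=post, log_wage=log_wage))
df = pd.DataFrame(rows)

# Two-way fixed effects: the coefficient on treated:post is the DiD estimate,
# and it is causal only under the parallel-trends assumption.
model = smf.ols("log_wage ~ treated:post + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(model.params["treated:post"])  # should land near the assumed 0.03
```

The causal claim lives entirely in the parallel-trends assumption, which is exactly where the external-validity worries discussed below come in.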

Michael Lipsitz & Evan Starr (2021) (are you starting to recognize that Starr name?) study a 2008 Oregon ban on noncompetes for hourly workers. They find the ban increased hourly wages overall by 2 to 3%, which implies that those actually signing noncompetes may have seen wages rise by as much as 14 to 21%. This 3% number is what the FTC assumes will apply to the whole economy when it estimates a $300 billion increase in wages per year under its ban. It’s a linear extrapolation.
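For a sense of what that extrapolation amounts to, here is the back-of-envelope version. The $10 trillion figure for the aggregate annual U.S. wage bill is my own round illustrative number, not the FTC’s precise input:

$$
\underbrace{0.03}_{\text{wage effect from the state studies}} \times \underbrace{\$10\ \text{trillion}}_{\text{approx. annual U.S. wage bill}} \approx \$300\ \text{billion per year}
$$

That is the entire calculation: one point estimate multiplied by one aggregate, with no adjustment for the external-validity or general equilibrium issues discussed below.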

Similarly, in 2015, Hawaii banned noncompetes for new hires within tech industries. Natarajan Balasubramanian et al. (2022) find that the ban increased new-hire wages by 4%. They also estimate that the ban increased worker mobility by 11%. Labor economists generally think of worker turnover as a good thing. Still, that inference is tricky here, when the whole point of the agreement is to reduce turnover and encourage a better relationship between workers and firms.

The FTC also points to three studies that find that banning noncompetes increases innovation, according to a few different measures. I won’t say anything about these because you can infer my reaction based on what I will say below on wage studies. If anything, I’m more skeptical of innovation studies, simply because I don’t think we have a good understanding of what causes innovation generally, let alone how to measure the impact of noncompetes on innovation. You can read what the FTC cites on innovation and make up your own mind.

From Academic Research to an FTC Ban

Now that we understand some of the papers, how do we move to policy?

Let’s assume I read the evidence basically as the FTC does. I don’t, and will explain as much in a future paper, but that’s not the debate for this post. How do we think about the optimal policy response, given the evidence?

There are two main reasons I am not ready to extrapolate from the research to the proposed ban. Every economist knows them: the dreaded pests of external validity and general equilibrium effects.

Let’s consider external validity through the Oregon ban paper and the Hawaii tech ban paper. Again, these are not critiques of the papers, but of how the FTC wants to move from them to a national ban.

Notice above that I said the Oregon ban went into effect in 2008, which means it happened as the whole country was entering a major recession and financial crisis. The authors do their best to deal with differential responses to the recession, but every state in their data went through a recession. Did the recession matter for the results? It seems plausible to me.

Another important detail about the Oregon ban is that it applied only to hourly workers, while the FTC rule would apply to all workers. You can’t confidently assume hourly workers are just like salaried workers. Hourly workers who sign noncompetes are less likely to read them, less likely to consult with their family about them, and less likely to negotiate over them. If part of the problem with noncompetes is that people don’t understand them until it is too late, you will overstate the harm if you look only at hourly workers, who understand noncompetes even less than salaried workers do. Also, with a partial ban, Lipsitz & Starr recognize that spillovers matter and that firms respond in different ways, such as converting hourly workers to salaried status to keep the noncompete, an escape valve that won’t exist under a national ban. It’s not the same experiment at a national scale. Which way will the effects change? How confident are we?

The effects of the Hawaii ban are unlikely to match the effects of the FTC’s ban. First of all, Hawaii is weird. It has a small population, and tech is a small part of the state’s economy. The ban even excluded telecom from within the tech sector. We are talking about a targeted ban. What does the Hawaii experiment tell us about a ban on noncompetes for tech workers in a non-island location like Boston? What does it tell us about a national ban on all noncompetes, like the one the FTC is proposing? Maybe these things do not matter. To further complicate things, the policy change also included a ban on nonsolicitation clauses. Maybe the nonsolicitation clause was unimportant. But I’d want more research and more policy experimentation to tease out these details.

As you dig into these papers, you find more and more of these issues. That’s not a knock on the papers but an inherent difficulty in moving from research to policy. It’s further compounded by the fact that this empirical literature is still relatively new.

What will happen when we scale these bans up to the national level? That’s a huge question for any policy change, especially one as large as a national ban. The FTC seems confident in what will happen, but moving from micro to macro is not trivial. Macroeconomists are starting to really get serious about how the micro adds up to the macro, but it takes work.

I want to know more. Which effects are amplified when scaled? Which effects drop off? What’s the full National Income and Product Accounts (NIPA) accounting? I don’t know. No one does, because we don’t have any of that sort of price-theoretic, general equilibrium research. There are lots of margins on which firms will adjust, and there’s always another margin we are not capturing. Instead, what the FTC did is a simple linear extrapolation from the state studies to a national ban: studies find a 3% wage effect; multiply that by the number of workers.

When we are doing policy work, we would also like some sort of welfare analysis. It’s not just about measuring workers in isolation. We need a way to think about the costs and benefits and how to trade them off. All the diff-in-diff regressions in the world won’t get at it; we need a model.

Luckily, we have one paper that blends empirics and theory to do welfare analysis.[2] Liyan Shi has a paper forthcoming in Econometrica—which is no joke to publish in—titled “Optimal Regulation of Noncompete Contracts.” In it, she studies a model meant to capture the tradeoff between encouraging a firm’s investment in workers and reducing labor mobility. To bring the theory to data, she scrapes data on U.S. public firms from Securities and Exchange Commission filings and merges those with firm-level data from Compustat, plus some others, to get measures of firm investment in intangibles. She finds that when she brings her model to the data and calibrates it, the optimal policy is roughly a ban on noncompetes.

It’s an impressive paper. Again, I’m unsure how much to take from it when extrapolating to a ban covering all workers. First, as I’ve written before, we know publicly traded firms are different from private firms, and that difference has changed over time. Second, it’s plausible that CEOs are different from other workers, and that the relationship between CEO noncompetes and firm-level intangible investment isn’t identical to the relationship between a mid-level engineer’s noncompete and the firm’s investment in that worker.

Beyond the particular issues with generalizing Shi’s paper, the larger concern is that this is the only paper that does a welfare analysis. That’s troubling to me as a basis for a major policy change.

I think an analogy to taxation is helpful here. I’ve published a few papers about optimal taxation, so it’s an area I’ve thought more about. Within optimal taxation, you see this type of paper a lot. Here’s a formal model that captures something that theorists find interesting. Here’s a simple approach that takes the model to the data.

My favorite optimal-taxation papers take this approach. Take a paper I absolutely love, “Optimal Taxation with Endogenous Insurance Markets” by Mikhail Golosov & Aleh Tsyvinski.[3] It is not a price-theory paper; it is a Theory—with a capital T—paper. I’m talking lemmas-and-theorems type of stuff: a bunch of QEDs, followed by a calibration of the model to U.S. data.

How seriously should we take their quantitative exercise? After all, it was in the Quarterly Journal of Economics and my professors were assigning it, so it must be an important paper. But people who know this literature will quickly recognize that it’s not the quantitative result that makes that paper worthy of the QJE.

I was very confused by this early in my career. If we find the best paper, why not take the result completely seriously? My first publication, which was in the Journal of Economic Methodology, grew out of my confusion about how economists were evaluating optimal tax models. Why did professors think some models were good? How were the authors justifying that their paper was good? Sometimes papers are good because they closely match the data. Sometimes papers are good because they quantify an interesting normative issue. Sometimes papers are good because they expose an interesting means-ends analysis. Most of the time, papers do all three blended together, and it’s up to the reader to be sufficiently steeped in the literature to understand what the paper is really doing. Maybe I read the Shi paper wrong, but I read it mostly as a theory paper.

One difference between the optimal-taxation literature and the optimal-noncompete policy world is that the Golosov & Tsyvinski paper is situated within 100 years of formal optimal-taxation models. The knowledgeable scholar of public economics can compare and contrast. The paper has a lot of value because it does one particular thing differently than everything else in the literature.

Or think about patent policy, which is what I compared noncompetes to in my original post. There is a tradeoff between encouraging innovation and restricting monopoly, and quantifying that tradeoff takes a model and data. Rafael Guthmann & David Rahman have a new paper on the optimal length of patents that Rafael summarized at Rafael’s Commentary. The basic structure is very similar to the Shi or Golosov & Tsyvinski papers: interesting models supplemented with a calibration exercise to put a number on the optimal policy. Guthmann & Rahman find an optimal patent length of four to eight years, instead of the current 20.

Is that true? I don’t know. I certainly wouldn’t want the FTC to unilaterally set the number at four years because of one paper. But I am glad for their contribution to the literature and to our understanding of the tradeoffs, and glad that I can position their number within a literature asking similar questions.

I’m sorry to all the people doing great research on noncompetes, but by my reading, the literature is just not there yet. For studying optimal-noncompete policy in a model, we have one paper. It was groundbreaking to tie this theory to novel data, but it is still one welfare analysis.

My Priors: What’s Holding Me Back from the Revolution

In a world where you start without any thoughts about which direction is optimal (a uniform prior) and you observe one paper that says bans are net positive, you should think that bans are net positive. Some information is better than none, and now you have some information. Make a choice.
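As a stylized illustration of that updating logic (the numbers are mine and purely hypothetical): start from a 50/50 prior on whether a ban is net positive, and suppose the one study is twice as likely to come out pro-ban if bans really are net positive than if they are not. Bayes’ rule then gives:

$$
P(\text{net positive} \mid \text{study}) = \frac{0.5 \times 0.6}{0.5 \times 0.6 + 0.5 \times 0.3} \approx 0.67
$$

With a uniform prior, even one modestly informative study moves you meaningfully.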

But that’s not the world we live in. We all come to a policy question with prior beliefs that affect how much we update our beliefs.

I have three slightly weird priors that I will argue you should also hold, but that currently place me out of step with most economists.

First, I place more weight on theoretical arguments than most. No one sits back and just absorbs the data without using theory; that’s impossible. All data requires theory. Still, I think it is meaningful to say some people place more weight on theory. I’m one of those people.

To be clear, I also care deeply about data. But I write theory papers and a theory-heavy newsletter. And I think these theories matter for how we think about data. The theoretical justification for noncompetes has been around for a long time, as I discussed in my original post, so I won’t say more.

The second way that I differ from most economists is even weirder. I place weight on the benefits of existing agreements or institutions, and the longer they have been in place, the more weight I place on those benefits. Josh Hendrickson and I have a paper with Alex Salter that basically formalizes George Stigler’s argument that “every long-lasting institution is efficient.” When there are feedback mechanisms, such as with markets or democracy, surviving institutions are the product of an evolutionary process that slowly selects more and more gains from trade. If they were so bad, people would eventually get rid of them. That’s not a free-market bias, since it also means that I think something like the Medicare system is likely an efficient form of social insurance and intertemporal bargaining for people in the United States.

Back to noncompetes, many companies use noncompetes in many different contexts. Many workers sign them. My prior is that they do so because a noncompete is a mutually beneficial contract that allows them to make trades in a world with transaction costs. As I explained in a recent post, Yoram Barzel taught us that, in a world with transaction costs, people will “erect social institutions to impose and enforce the restraints.”

One possible rebuttal is that noncompetes, while they have existed for a long time, have only become common in the past few decades. That is not very long-lasting, and so the FTC ban is a natural policy response to a new challenge and to the discovery that these contracts are actually bad. That rebuttal would persuade me more if the ban were the product of a democratic bargain rather than an ideological agenda pushed by the FTC’s chair, which I think is closer to reality. That is Earl Thompson and Charlie Hickson’s spin on Stigler’s efficient-institutions point: ideology gets in the way.

Finally, relative to most economists, I place more weight on experimentation and feedback mechanisms. Most economists still think of the world through the lens of a benevolent planner doing a cost-benefit analysis. I do that sometimes, too, but I also think we need to take our own informational limitations seriously. That’s why we talk about limited information all the time on my newsletter. Again, if we started completely agnostic, this wouldn’t point one way or another: we recognize that we don’t know much, but a slight signal would push us in its direction. But when paired with my previous point about evolution, it means I’m hesitant about a national ban.

I don’t think the science is settled on lots of things that people want to tell us the science is settled on. For example, I’m not convinced we know markups are rising. I’m not convinced market concentration has skyrocketed, as others want to claim.

It’s not a free-market bias, either. I’m not convinced the Jones Act is bad. I’m not convinced it’s good, but Josh has convinced me that the question is complicated.

Because I’m not ready to easily say the science is settled, I want to know how we will learn if we are wrong. In a prior Truth on the Market post about the FTC rule, I quoted Thomas Sowell’s Knowledge and Decisions:

In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it—through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong.

A national ban bypasses this and severely cuts off our ability to learn if we are wrong. That worries me.

Maybe this all means that I am too conservative and need to be more open to changing my mind. Maybe I’m inconsistent in how I apply these ideas. After all, “there’s always another margin” also means that the harm of a policy will be smaller than anticipated since people will adjust to avoid the policy. I buy that. There are a lot more questions to sort through on this topic.

Unfortunately, the discussion around noncompetes has been short-circuited by the FTC. Hopefully, this post gave you tools to think about a variety of policies going forward.


[1] The U.S. Bureau of Labor Statistics now collects data on noncompetes. Since 2017, we’ve had one question on noncompetes in the National Longitudinal Survey of Youth 1997. Donna S. Rothstein and Evan Starr (2021) also find that noncompetes cover around 18% of workers. It is very plausible this is an understatement, since noncompetes are complex legal documents, and workers may not understand that they have one.

[2] Other papers combine theory and empirics. Kurt Lavetti, Carol Simon, & William D. White (2023) build a model to derive testable implications about holdups. They use data on doctors and find that noncompetes raise returns to tenure and lower turnover.

[3] It’s not exactly the same. The Golosov & Tsyvinski paper doesn’t even take the calibration seriously enough to include the details in the published version. Shi’s paper is a more serious quantitative exercise.

Just before Christmas, the European Commission published a draft implementing regulation (DIR) of the Digital Markets Act (DMA), establishing procedural rules that, in the Commission’s own words, seek to bolster “legal certainty,” “due process,” and “effectiveness” under the DMA. The rights of defense laid down in the draft are, alas, anemic. In the long run, this will leave the Commission’s DMA-enforcement decisions open to challenge on procedural grounds before the Court of Justice of the European Union (CJEU).

This is a loss for due process, for third parties seeking to rely on the Commission’s decisions, and for the effectiveness of the DMA itself.

Detailed below are some of the significant problems with the DIR, as well as suggestions for how to address them. Many of these same issues have been highlighted in the comments submitted by likely gatekeepers, law firms, and academics during the open-consultation period. You can also read the brief explainer that Dirk Auer & I wrote on the DIR here.

Access to File

The DIR establishes that parties have the right to access the documents on which the Commission relied in issuing its preliminary findings. But if parties wish to access other documents in the Commission’s file, they will need to submit a “substantiated request.” Among the problems with this approach is that the documents cited in the Commission’s preliminary findings will be of limited use to defendants, as they are likely to be those used to establish an infringement, and thus unlikely to be exculpatory.

Moreover, as the CJEU has stated, it should not be up to the Commission alone to decide whether to disclose documents in the file. The Commission may exclude from the administrative procedure documents that are unrelated to the statement of objections, but that isn’t the same as excluding documents merely because they aren’t mentioned in the statement of objections. After all, evidence might be irrelevant for the prosecution but relevant for the defense.

Parties’ right to be heard is unnecessarily circumscribed by the requirement that they “duly substantiate why access to a specific document or part thereof is necessary to exercise its right to be heard.” A party might be hard-pressed to argue convincingly that it needs access to a document based solely on a terse and vague description in the Commission’s file. More generally, why would a document be in the Commission’s file if it is not relevant to the case? The right to be heard cannot be respected where access to information is prohibited.

Solution: The DIR should allow gatekeepers full access to the Commission’s file. This is the norm in antitrust and merger proceedings in the EU, where:

undertakings or associations of undertakings that receive a Statement of Objections have the right to see all the evidence, whether it is incriminating or exonerating, in the Commission’s investigation file. [bold in original]

 There is little sense in deviating from this standard in DMA proceedings.

No Role for the Hearing Officer

The DIR does not spell out a role for the hearing officer, a particularly jarring omission given the Commission’s history of acting as “judge, jury and executioner” in competition-law proceedings (see here, here and here). Hearing officers are a staple in antitrust (here and here), as well as in trade proceedings more generally, where their role is to enhance impartiality and objectivity by, e.g., resolving disputes over access to certain documents. As Alfonso Lamadrid has noted, an obvious inference to reach is that DMA proceedings before the Commission are to be less impartial and objective.

Solution: Grant the hearing officer a role in, at the very least, resolving access-to-file and confidentiality disputes.

Cap on the Length of Responses

The DIR establishes a 50-page limit on parties’ responses to the Commission’s preliminary findings. Of course, no such cap is imposed on the Commission in issuing its preliminary findings, designation decisions, and other decisions under the DMA. This imbalance between the Commission’s and respondents’ duties plainly violates the principle of equality of arms—a fundamental element of the right to a fair trial under Article 47 of the EU Charter of Fundamental Rights.

An arbitrary page limit also means that the Commission may not take all relevant facts and evidence into account in its decisions, which will be based largely on the preliminary findings and the related response. This lays the groundwork for subsequent challenges before the courts.

Solution: Either remove the cap on responses to preliminary findings or impose a similar limit on the Commission in issuing those findings.

A ‘Succinct’ Right to Speak

The DIR does not contemplate granting parties oral hearings to explain their defense more fully. Oral hearings are particularly important in cases involving complex and technical arguments and evidence.

While the right to a fair trial does not require oral hearings to be held in every case, “refusing to hold an oral hearing may be justified only in rare cases.” Given that, under the DMA, companies can be fined as much as 20% of their worldwide turnover, these proceedings involve severe financial penalties of a criminal or quasi-criminal nature (here and here), and are thus unlikely to qualify as such rare cases (here).

Solution: Grant parties the ability to request an oral hearing following the preliminary findings.

Legal Uncertainty

As one commenter put it, “the document is striking for what it leaves out.”  As Dirk Auer and I point out, the DIR leaves unanswered such questions as the precise role of third parties in DMA processes; the role of the advisory committee in decision making; whether the college of commissioners or just one commissioner is the ultimate decision maker; whether national authorities will be able to access data gathered by the Commission; and whether there is a role for the European Competition Network in coordinating and allocating cases between the EU and the member states.

Granted, not all of these questions needed to be answered in the DIR (although some—like the role of third parties—arguably should have been). Still, the sooner they are resolved, the better for everyone. 

Solution: Clarify the above questions—either with the final version of the implementing regulation or soon thereafter—in a manual of procedures or best-practice guidelines, as appropriate.

Conclusion

Unless substantive changes are made, the DIR in its current form risks running afoul of a well-established line of jurisprudence highlighting the importance of fundamental rights in antitrust proceedings, rights that apply equally in DMA proceedings. One of these is the general principle that judicial and administrative promptness cannot be attained at the expense of parties’ rights of defense (here). Ignoring this would not only be a loss for the rights of defense in the EU, but would also undermine the effectiveness of the DMA, thereby staining the Commission’s credibility.

Writing in today’s Wall Street Journal, former U.S. Labor Secretary Gene Scalia games out the future of the Federal Trade Commission’s (FTC) recently proposed rule that would ban the use of most noncompete clauses. He writes that:

The Federal Trade Commission’s ban on noncompete agreements may be the most audacious federal rule ever proposed. If finalized, it would outlaw terms in 30 million contracts and pre-empt laws in virtually every state. It would also, by the FTC’s own account, reduce capital investment, worker training and possibly job growth, while increasing the wage gap. The commission says the rule would deliver a meager 2.3% wage increase for hourly workers, versus a 9.4% increase for CEOs.

Three phases lie ahead for the proposal: rule-making, litigation and compliance. … The FTC is likely to finalize the rule within a year, to ensure the Biden administration can begin the task of defending it in the litigation phase. The proposal’s legal vulnerabilities are legion. …

Sketching the likely future of the proposed rule in this way is helpful. Most of those affected by this rule are unlikely to be familiar with the rulemaking process or the judicial process for reviewing agency rules; indeed, many are likely to hear coverage of the proposed rule and mistake it for a regulation that’s already in effect. The cost of that confusion is made clear by Scalia’s ultimate takeaway: that the courts are very likely to reject the rule (and perhaps the FTC’s authority to adopt these types of competition rules), but only after a protracted judicial review process (including, quite possibly, a trip to the U.S. Supreme Court).

As Scalia explains, many employers will act upon this likely ill-fated rule out of fear or confusion, altering their employment contracts in ways that will be hard to later amend:

Unfortunately, some employers may now reduce the benefits they offer in exchange for noncompetes, for fear the rule may eventually render the agreement unenforceable. But because the FTC may change aspects of the rule—and because the courts are likely to invalidate it—American businesses don’t need to invest now in complying with this deeply flawed proposal.

This should raise serious concern about the FTC’s approach to this issue. It is very likely that the Commission is aware of the rocky shoals that lie ahead. But it is also likely that the Commission knows that its posturing will affect the conduct of the business community. It’s not much of a leap to conclude that the Commission—that is, its three-member majority—is acting as a norm entrepreneur, using its rulemaking process rather than its substantive legal authority to jawbone the business community and move the Overton window that frames discussion of noncompete clauses. I feel dirty writing a sentence as jargon-filled as that one, but no dirtier than the Commission should feel for abusing rulemaking procedures to achieve substantive ends beyond its legal authority.

This concern resembles an issue currently before the Supreme Court in Axon Enterprise v. FTC, yet another case involving the Commission. Generally, agency actions cannot be challenged in federal court until the agency has finalized its action and affected parties have exhausted their appeals before the agency. Indeed, the statutes that govern some agencies (including the FTC) have provisions that have been interpreted as preventing challenges to the agency’s authority from being brought before a federal district court.

In Axon, the Supreme Court is considering whether a company subject to administrative proceedings before the Commission can challenge the constitutionality of those proceedings in district court prior to their completion. Oral arguments were heard this past November and, while reading tea leaves based upon oral arguments is a fraught endeavor, those arguments did not seem to go well for the FTC. It seems likely that the Court will allow firms to raise such challenges prior to final agency action in adjudication, precisely because barring them would allow the Commission to cause non-redressable harms to the firms it investigates; several years of unconstitutional litigation can be devastating to a business.

The Axon case involves adjudication against a single firm, which raises somewhat different issues from those raised when an agency is developing rules that will affect an entire industry. Most notably, constitutional Due Process protections are implicated when the government takes action against a single firm. It is unlikely that the outcome in Axon—even if as adverse to the FTC as foreseeably possible—would extend to allow firms to challenge an agency rulemaking process on the ground that it exceeds the agency’s statutory (as opposed to constitutional) authority.

But the Commission should nonetheless take the concerns at issue in Axon to heart. If the Supreme Court rules against the Commission in Axon, it will be a strong signal that the Court has concerns about how the Commission is using the authority that Congress has given it. One could even say that it will be the latest in a series of such signals, given that the Court recently rejected the Commission’s claimed authority to obtain monetary relief under Section 13(b). As Scalia notes, the Commission is already pushing the outermost limits of its statutory authority with the rule that it has proposed. The extent of the coming judicial (or congressional) rebuke will be greatly expanded if the courts feel that the agency has abused the rulemaking process to achieve substantive goals that exceed that outermost limit.

Happy New Year? Right, Happy New Year! 

The big news from the Federal Trade Commission (FTC) is all about noncompetes. Once largely the province of labor and contract law, noncompetes are terms in employment contracts that limit, in various ways, an employee’s ability to work at a competing firm after separation from the signatory firm. They’ve been a matter of increasing interest to economists, policymakers, and enforcers for several reasons. For one, there have been prominent news reports of noncompetes used in dubious places; the traditional justifications for noncompetes seem strained when applied to low-wage workers, so why are we reading about noncompetes binding sandwich-makers at Jimmy John’s?

For another, there’s been increased interest in the application of antitrust to labor markets more generally. One example among many: a joint FTC/U.S. Justice Department workshop in December 2021.

Common-law cases involving one or another form of noncompete go back several hundred years. So, what’s new? First, on Jan. 4, the FTC announced settlements with three firms regarding their use of noncompetes, which the FTC had alleged to violate Section 5. These are consent orders, not precedential decisions. The complaints were, presumably, based on rule-of-reason analyses of facts, circumstances, and effects. On the other hand, the Commission’s recent Section 5 policy statement seemed to disavow the time-honored (and Supreme-Court-affirmed) application of the rule of reason. I wrote about it here, and with Gus Hurwitz here. My ICLE colleagues Dirk Auer, Brian Albrecht, and Jonathan Barnett did too, among others. 

The Commission’s press release seemed awfully general:

Noncompete restrictions harm both workers and competing businesses. For workers, noncompete restrictions lead to lower wages and salaries, reduced benefits, and less favorable working conditions. For businesses, these restrictions block competitors from entering and expanding their businesses.

Always? Distinct facts and circumstances? Commissioner Christine Wilson noted the brevity of the complaints in her dissent:

…each Complaint runs three pages, with a large percentage of the text devoted to boilerplate language. Given how brief they are, it is not surprising that the complaints are woefully devoid of details that would support the Commission’s allegations. In short, I have seen no evidence of anticompetitive effects that would give me reason to believe that respondents have violated Section 5 of the FTC Act. 

She did not say that the noncompetes were fine. In a separate statement regarding one of the matters, she noted that various aspects of the noncompetes imposed on security guards (running two years from termination of employment, with $10,000 in liquidated damages for breach) had been found unreasonable by a state court, and therefore unenforceable under Michigan law. That state-court finding seemed to her “reasonable.” I’m no expert on Michigan state law, but those terms seem to me suspect under general standards of reasonableness. Whether there was a federal antitrust violation is far less clear.

One more clue (and even bigger news) came the very next day: the Commission published a notice of proposed rulemaking (NPRM) proposing to ban the use of noncompetes in general. Subject to a limited exception for the sale of a business, noncompetes would be deemed violative of Section 5 across occupations, income levels, and industries. That is, the FTC proposed to regulate the terms of employment agreements for nearly the whole of the U.S. labor force. Step aside, federal and state labor law (and the U.S. Labor Department and Congress); and step aside, ongoing and active statutory experimentation on noncompete enforcement in the states.

So many questions. There are reasons to wonder about many noncompetes. They do have the potential to solve holdup problems for firms that might otherwise underinvest in employee training and might undershare trade secrets or other proprietary information. But that’s not much of an explanation for restrictions on a counter person at a sub shop, and I’m pretty suspicious of the liquidated damages provision in the security-guards matter. Credible economic studies raise concerns, as well. 

Still, this is an emerging area of study, and many positive contributions to it (like the one linked just now, and this) illustrate research challenges that remain. An FTC Bureau of Economics working paper (oddly not cited in the 215-page NPRM) reviews the body of literature, observing that results are mixed, and that many of the extant studies have shortcomings. 

For similar reasons, comments submitted to an FTC workshop on noncompetes by the Antitrust Section of the American Bar Association said that cross-state variations in noncompete law “are seemingly justified, as the views and literature on non-compete clauses (and restrictive covenants in employment contracts generally) are mixed.”

So here are a few more questions that cannot possibly be resolved in a single blog post:

  1. Does the FTC have the authority to issue substantive (“legislative”) competition regulations? 
  2. Would a regulation restricting a common contracting practice across all occupations, industries, and income levels implicate the major questions doctrine? (Ok, skipping ahead: Yes.)
  3. Does it matter, for the major questions doctrine or otherwise, that there’s a substantial body of federal statutory law regarding labor and employment and a federal agency (a good deal larger than the FTC) charged to enforce the law?
  4. Does it matter that the FTC simply doesn’t have the personnel (or congressionally appropriated budget) to enforce such a sweeping regulation?
    • Is the number of experienced labor lawyers currently employed as staff in the FTC’s Bureau of Competition nonzero? If so, what is it? 
  5. Does it matter that this is an active area of state-level legislation and enforcement?
  6. Do the effects of noncompetes vary as the terms of noncompetes vary, as suggested in the ABA comments linked above? And if so, on what dimensions?
    • Do the effects vary according to the market power of the employer in local (or other geographically relevant) labor markets and, if so, should that matter to an antitrust enforcer?
    • If the effects vary significantly, is a one-size-fits-all regulation the best path forward?
  7. Many published studies seem to report average effects of policy changes on, e.g., wages or worker mobility for some class of workers. Should we know more about the distribution of those effects before the FTC (or anyone else) adopts uniform federal regulations? 
  8. How well do we know the answer to the myriad questions raised by noncompetes? As the FTC working paper observes, many published studies seem to rely heavily on survey evidence on the incidence of noncompetes. Prior to adopting  a sweeping competition regulation, should the FTC use its 6b subpoena authority to gather direct evidence? Why hasn’t it?
  9. The FTC’s Bureau of Economics employs a large expert staff of research economists. Given the questions raised by the FTC Working Paper, how else might the FTC contribute to the state of knowledge of noncompete usage and effects before adopting a sweeping, nationwide prohibition? Are there lacunae in the literature that the FTC could fill? For example, there seem to be very few papers regarding the downstream effects on consumers, which might matter to consumers. And while we’re in labor markets, what about the relationship between noncompetes and employment? 

Well, that’s a lot. In my defense, I’ll  note that the FTC’s November 2022 Advance Notice of Proposed Rulemaking on “commercial surveillance” enumerated 95 complex questions for public comment. Which is more than nine. 

I didn’t even get to the once-again dismal ratings of the FTC’s senior agency leadership in the 2022 OPM Federal Employee Viewpoint Survey. Last year’s results were terrible—a precipitous drop from 2020. This year’s results were worse. Worse yet, they show that last year’s results were not a mere transient deflation in morale. But that discussion will have to wait for another blog post.

Later next month, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google LLC, a case that has drawn significant attention and many bad takes regarding how Section 230 of the Communications Decency Act should be interpreted. Enacted in the mid-1990s, when the Internet as we know it was still in its infancy, Section 230 has grown into a law that offers online platforms a fairly comprehensive shield against liability for the content that third parties post to their services. But the law has also come increasingly under fire, from both the political left and the right. 

At issue in Gonzalez is whether Section 230(c)(1) immunizes Google from a set of claims brought under the Antiterrorism Act of 1990 (ATA). The petitioners are relatives of Nohemi Gonzalez, an American citizen murdered in a 2015 terrorist attack in Paris. They allege that Google, through YouTube, is liable under the ATA for providing assistance to ISIS, for four main reasons:

  1. Google allowed ISIS to use YouTube to disseminate videos and messages, thereby recruiting and radicalizing terrorists responsible for the murder.
  2. Google failed to take adequate steps to take down videos and accounts and keep them down.
  3. Google recommends others’ videos, both through subscriptions and through its algorithms.
  4. Google monetizes this content through its AdSense service, with ISIS-affiliated users receiving revenue. 

The 9th U.S. Circuit Court of Appeals dismissed all of the non-revenue-sharing claims as barred by Section 230(c)(1), but allowed the revenue-sharing claim to go forward. 

Highlights of DOJ’s Brief

In an amicus brief, the U.S. Justice Department (DOJ) ultimately asks the Court to vacate the 9th Circuit’s judgment regarding those claims that are based on YouTube’s alleged targeted recommendations of ISIS content. But the DOJ also rejects much of the petitioners’ brief, arguing that Section 230 does rightfully apply to the rest of the claims.

The crux of the DOJ’s brief concerns when and how design choices can fall outside of Section 230 immunity. The lodestar 9th Circuit case that the DOJ brief applies is 2008’s Fair Housing Council of San Fernando Valley v. Roommates.com.

As the DOJ notes, radical theories advanced by the plaintiffs and other amici would go too far in restricting Section 230 immunity based on a platform’s decisions on whether or not to block or remove user content (see, e.g., its discussion on pp. 17-21 of the merits and demerits of Justice Clarence Thomas’s Malwarebytes concurrence).  

At the same time, the DOJ’s brief notes that there is room for a reasonable interpretation of Section 230 under which liability attaches when online platforms behave unreasonably in their promotion of users’ content. Applying essentially the 9th Circuit’s Roommates.com standard, the DOJ argues that YouTube’s choice to amplify certain terrorist content through its recommendations algorithm is a design choice, rather than simply the hosting of third-party content, thereby removing it from the scope of Section 230 immunity.

While there is much to be said in favor of this approach, and while it is directionally correct, it’s not at all clear that a Roommates.com analysis should ultimately come out as the DOJ recommends in Gonzalez. More broadly, the way the DOJ structures its analysis has important implications for how we should think about the scope of Section 230 reform that attempts to balance accountability for intermediaries with avoiding undue collateral censorship.

Charting a Middle Course on Immunity

The important point on which the DOJ relies from Roommates.com is that intermediaries can be held accountable when their own conduct creates violations of the law, even if it involves third-party content. As the DOJ brief puts it:

Section 230(c)(1) protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content. The Ninth Circuit’s Roommates.com decision illustrates the point in the context of a website offering a roommate-matching service… As a condition of using the service, Roommates.com “require[d] each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household,” and to “describe his preferences in roommates with respect to the same three criteria.” Ibid. The plaintiffs alleged that asking those questions violated housing-discrimination laws, and the court of appeals agreed that Section 230(c)(1) did not shield Roommates.com from liability for its “own acts” of “posting the questionnaire and requiring answers to it.” Id. at 1165.

Imposing liability in such circumstances does not treat online platforms as the publishers or speakers of content provided by others. Nor does it obligate them to monitor their platforms to detect objectionable postings, or compel them to choose between “suppressing controversial speech or sustaining prohibitive liability.”… Illustrating that distinction, the Roommates.com court held that although Section 230(c)(1) did not apply to the website’s discriminatory questions, it did shield the website from liability for any discriminatory third-party content that users unilaterally chose to post on the site’s “generic” “Additional Comments” section…

The DOJ proceeds from this basis to analyze what it would take for Google (via YouTube) to no longer benefit from Section 230 immunity by virtue of its own conduct, as opposed to its actions as a publisher of third-party content (which Section 230 would still protect). For instance, are the algorithmic suggestions of videos simply neutral tools that allow users to get more of the content they desire, akin to search results? Or are the algorithmic suggestions of new videos a design choice that makes YouTube akin to Roommates.com?

The DOJ argues that taking steps to better display pre-existing content is not content development or creation, in and of itself. Similarly, it would be a mistake to make intermediaries liable for creating tools that can then be deployed by users:

Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information. A chatroom might supply topic headings to organize posts; a photo-sharing site might offer a feature for users to signal that they like or dislike a post; a classifieds website might enable users to add photos or maps to their listings. If such features rendered the website a co-developer of all users’ content, Section 230(c)(1) would be a dead letter.

At a high level, this is correct. Unfortunately, the DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did. Toward this end, the brief notes that:

The distinction between a recommendation and the recommended content is particularly clear when the recommendation is explicit. If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, “You should watch this,” that message would fall outside Section 230(c)(1). Encouraging a user to watch a selected video is conduct distinct from the video’s publication (i.e., hosting). And while YouTube would be the “publisher” of the recommendation message itself, that message would not be “information provided by another information content provider.” 47 U.S.C. 230(c)(1).

An Absence of Immunity Does Not Mean a Presence of Liability

Importantly, the DOJ brief emphasizes throughout that remanding the ATA claims is not the end of the analysis—i.e., it does not mean that the plaintiffs can prove the elements of their claims. Moreover, other background law—notably, the First Amendment—can limit the application of liability to intermediaries as well. As we put it in our paper on Section 230 reform:

It is important to again note that our reasonableness proposal doesn’t change the fact that the underlying elements in any cause of action still need to be proven. It is those underlying laws, whether civil or criminal, that would possibly hold intermediaries liable without Section 230 immunity. Thus, for example, those who complain that FOSTA/SESTA harmed sex workers by foreclosing a safe way for them to transact (illegal) business should really be focused on the underlying laws that make sex work illegal, not the exception to Section 230 immunity that FOSTA/SESTA represents. By the same token, those who assert that Section 230 improperly immunizes “conservative bias” or “misinformation” fail to recognize that, because neither of those is actually illegal (nor could they be under First Amendment law), Section 230 offers no additional immunity from liability for such conduct: There is no underlying liability from which to provide immunity in the first place.

There’s a strong likelihood that, on remand, the court will find there is no violation of the ATA at all. Section 230 immunity need not be stretched beyond all reasonable limits to protect intermediaries from hypothetical harms when underlying laws often don’t apply. 

Conclusion

To date, the contours of Section 230 reform largely have been determined by how courts interpret the statute. There is an emerging consensus that some courts have gone too far in extending Section 230 immunity to intermediaries. The DOJ’s brief is directionally correct, but the Court should not adopt it wholesale. More needs to be done to ensure that the particular facts of Gonzalez are not used to gut Section 230 more generally.

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on private firms’ use of data and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta can still argue that it has other legal bases on which to make use of user data, but a larger issue is at play: the decision finds both that using user data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s argument that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite view and used its powers under the GDPR to direct the DPC to issue a decision contrary to the DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing; it only invalidates “contractual necessity” as the basis for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising without depending on the “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations: what is technically possible to offer can be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of the contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to conclude whether those services are economically viable without personalized advertising.

There is, however, a key institutional point to be made here. Privacy regulators are ill-equipped to conduct this kind of analysis, which arguably counsels significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, it is the balance of costs and benefits across a business model’s technological and economic components that truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to consider the full economic and technological reality of the company’s business model.

The Federal Trade Commission’s (FTC) Jan. 5 “Notice of Proposed Rulemaking on Non-Compete Clauses” (NPRMNCC) is the first substantive FTC Act Section 6(g) “unfair methods of competition” rulemaking initiative following the release of the FTC’s November 2022 Section 5 Unfair Methods of Competition Policy Statement. Any final rule based on the NPRMNCC stands virtually no chance of surviving judicial review. What’s more, the initiative threatens to have a major negative economic-policy impact and poses an institutional threat to the Commission itself. Accordingly, the NPRMNCC should be withdrawn or, as a “second worst” option, substantially pared back and recast.

The NPRMNCC is succinctly described, and its legal risks ably summarized, in a recent commentary by Gibson Dunn attorneys. The proposal is sweeping in its scope. The NPRMNCC states that it “would, among other things, provide that it is an unfair method of competition for an employer to enter into or attempt to enter into a non-compete clause with a worker; to maintain with a worker a non-compete clause; or, under certain circumstances, to represent to a worker that the worker is subject to a non-compete clause.”

The Gibson Dunn commentary adds that it “would require employers to rescind all existing non-compete provisions within 180 days of publication of the final rule, and to provide current and former employees notice of the rescission. If employers comply with these two requirements, the rule would provide a safe harbor from enforcement.”

As I have explained previously, any FTC Section 6(g) rulemaking is likely to fail as a matter of law. Specifically, the structure of the FTC Act indicates that Section 6(g) is best understood as authorizing procedural regulations, not substantive rules. What’s more, Section 6(g) rules raise serious questions under the U.S. Supreme Court’s nondelegation and major questions doctrines (given the breadth and ill-defined nature of “unfair methods of competition”) and under administrative law (very broad unfair methods of competition rules may be deemed “arbitrary and capricious” and raise due process concerns). The cumulative weight of these legal concerns “makes it highly improbable that substantive UMC rules will ultimately be upheld.”

The legal concerns raised by Section 6(g) rulemaking are particularly acute in the case of the NPRMNCC, which is exceedingly broad and deals with a topic—employment-related noncompete clauses—with which the FTC has almost no experience. FTC Commissioner Christine Wilson highlights this legal vulnerability in her dissenting statement opposing issuance of the NPRMNCC.

As Andrew Mercado and I explained in our commentary on potential FTC noncompete rulemaking: “[a] review of studies conducted in the past two decades yields no uniform, replicable results as to whether such agreements benefit or harm workers.” In a comprehensive literature review made available online at the end of 2019, FTC economist John McAdams concluded that “[t]here is little evidence on the likely effects of broad prohibitions of non-compete agreements.” McAdams also commented on the lack of knowledge regarding the effects that noncompetes may have on ultimate consumers. Given these realities, the FTC would be particularly vulnerable to having a court hold that a final noncompete rule (even assuming that it somehow surmounted other legal obstacles) lacked an adequate factual basis, and thus was arbitrary and capricious.

The poor legal case for proceeding with the NPRMNCC is rendered even weaker by the existence of robust state-law provisions concerning noncompetes in almost every state (see here for a chart comparing state laws). Differences in state jurisprudence may enable “natural experimentation,” whereby changes made to state law that differ across jurisdictions facilitate comparisons of the effects of different approaches to noncompetes. Furthermore, changes to noncompete laws in particular states that are seen to cause harm, or generate benefits, may allow “best practices” to emerge and thereby drive welfare-enhancing reforms in multiple jurisdictions.

The Gibson Dunn commentary points out that, “[a]s a practical matter, the proposed [FTC noncompete] rule would override existing non-compete requirements and practices in the vast majority of states.” Unfortunately, then, the NPRMNCC would largely do away with the potential benefits of competitive federalism in the area of noncompetes. In light of that, federal courts might well ask whether Congress meant to give the FTC preemptive authority over a legal field traditionally left to the states, merely by making a passing reference to “mak[ing] rules and regulations” in Section 6(g) of the FTC Act. Federal judges would likely conclude that the answer to this question is “no.”

Economic Policy Harms

How much economic harm could an FTC rule on noncompetes cause, if the courts almost certainly would strike it down? Plenty.

The affront to competitive federalism, which would prevent optimal noncompete legal regimes from developing (see above), could reduce the efficiency of employment contracts and harm consumer welfare. It would be exceedingly difficult (if not impossible) to measure such harms, however, because there would be no alternative “but-for” worlds with differing rules that could be studied.

The broad ban on noncompetes predictably will prevent—or at least chill—the use of noncompete clauses to protect business-property interests (including trade secrets and other intellectual-property rights) and to protect value-enhancing investments in worker training. (See here for a 2016 U.S. Treasury Department Office of Economic Policy Report that lists some of the potential benefits of noncompetes.) The NPRMNCC fails to account for those and other efficiencies, which may be key to value-generating business-process improvements that help drive dynamic economic growth. Once again, however, it would be difficult to demonstrate the nature or extent of such foregone benefits, in the absence of “but-for” world comparisons.

Business-litigation costs would also inevitably arise, as uncertainties in the language of a final noncompete rule were worked out in court (prior to the rule’s legal demise). The opportunity cost of firm resources directed toward rule-related issues, rather than to business-improvement activities, could be substantial. The opportunity cost of directing FTC resources to wasteful noncompete-related rulemaking work, rather than potential welfare-enhancing endeavors (such as anti-fraud enforcement activity), also should not be neglected.

Finally, the substantial error costs that would attend designing and seeking to enforce a final FTC noncompete rule, and the affront to the rule of law that would result from creating a significant new gap between FTC and U.S. Justice Department competition-enforcement regimes, merit note (see here for my discussion of these costs in the general context of UMC rulemaking).

Conclusion

What, then, should the FTC do? It should withdraw the NPRMNCC.

If the FTC is concerned about the effects of noncompete clauses, it should commission appropriate economic research and perhaps conduct targeted FTC Act Section 6(b) studies directed at noncompetes (focused on industries where noncompetes are common or ubiquitous). In light of that research, it might be in a position to address legal policy toward noncompetes through competition advocacy before the states, or in testimony before Congress.

If the FTC still wishes to engage in some rulemaking directed at noncompete clauses, it should consider a targeted FTC Act Section 18 consumer-protection rulemaking (see my discussion of this possibility here). Unlike Section 6(g), the legality of substantive Section 18 rulemaking (which is directed at “unfair or deceptive acts or practices”) is well-established. Categorizing noncompete-clause-related practices as “deceptive” is plainly a nonstarter, so the Commission would have to base its rulemaking on defining and condemning specified “unfair acts or practices.”

Section 5(n) of the FTC Act specifies that the Commission may not declare an act or practice to be unfair unless it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” This is a cost-benefit test that plainly does not justify a general ban on noncompetes, based on the previous discussion. It probably could, however, justify a properly crafted narrower rule, such as a requirement that an employer notify its employees of a noncompete agreement before they accept a job offer (see my analysis here).  

Should the FTC nonetheless charge forward and release a final competition rule based on the NPRMNCC, it will face serious negative institutional consequences. In the previous Congress, Sens. Mike Lee (R-Utah) and Chuck Grassley (R-Iowa) introduced legislation that would strip the FTC of its antitrust authority (leaving all federal antitrust enforcement in DOJ hands). Such legislation could gain traction if the FTC were perceived as engaging in massive institutional overreach. An unprecedented Commission effort to regulate one aspect of labor contracts (noncompete clauses) nationwide surely could be viewed by Congress as a prime example of such overreach. The FTC should keep that in mind if it values maintaining its longstanding role in American antitrust-policy development and enforcement.