In February’s FTC roundup, I noted an op-ed in the Wall Street Journal in which Commissioner Christine Wilson announced her intent to resign from the Federal Trade Commission. Her departure, and her stated reasons for it, were not encouraging for those of us who would prefer to see the FTC function as a stable, economically grounded, and genuinely bipartisan independent agency. Since then, Wilson has specified her departure date: March 31, two weeks hence.
With Wilson’s departure, and that of Commissioner Noah Phillips in October 2022 (I wrote about that here, and I recommend Alden Abbott’s post on Noah Phillips’ contribution to the 1-800 Contacts case), we’ll have a strictly partisan commission—one lacking any Republican commissioners or, indeed, anyone who might properly be described as a moderate or mainstream antitrust lawyer or economist. We shall see what the appointment process delivers and when; soon, I hope, but I’m not holding my breath.
Next Comes Exodus
As followers of the FTC—faithful, agnostic, skeptical, or occasional—are all aware, the commissioners have not been alone in their exodus. Not a few staffers have left the building.
In a Bloomberg column just yesterday, Dan Papscun covers the scope of the departures, “at a pace not seen in at least two decades.” Based on data Bloomberg obtained through a Freedom of Information Act request, Papscun notes the departure of “99 senior-level career attorneys” from 2021 through 2022, including 71 experienced GS-15-level attorneys and 28 members of the Senior Executive Service.
To put those numbers in context, this left the FTC—an agency with dual antitrust and consumer-protection authority ranging over most of domestic commerce—with some 750 attorneys at the end of 2022. That’s a decent size for a law firm that lacks global ambitions, but a little lean for the agency. Papscun quotes Debbie Feinstein, former head of the FTC’s Bureau of Competition during the Obama administration: “You lose a lot of institutional knowledge” with the departure of senior staff and career leaders. Indeed you do.
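For rough context (this is my own back-of-the-envelope arithmetic, not a figure from the Bloomberg data, and it ignores offsetting hires and any other exits), comparing the 99 senior departures to the roughly 750 attorneys remaining at year-end 2022 suggests the agency shed on the order of one in eight or nine of its lawyers over those two years, and all of them from its most experienced ranks:

\[
\frac{99}{750 + 99} = \frac{99}{849} \approx 0.12
\]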
Onward and Somewhere
The commission continues to scrutinize noncompete terms in employment agreements by bringing cases, even as it entertains comments on its proposal to ban nearly all such terms by regulation (see here, here, here, here, here, here, here, here, and here for “a few” ToTM posts on the proposal). As I noted before, the NPRM cites three recent settlements of Section 5 cases against firms’ use of noncompetes as a means of documenting the commission’s experience with such terms. It’s important to define one’s terms clearly. By “cases,” I mean administrative complaints resolved by consent orders, with no stipulation of any antitrust violation, rather than cases litigated to their conclusion in federal court. And by “recent,” I mean settlements announced the very day before the publication of the NPRM.
Also noted was the brevity of the complaints, and of the memoranda and orders memorializing the settlements. It’s entirely possible that the FTC’s allegations in one, two, or all of the matters were correct, but based on the public documents, it’s hard to tell how the noncompetes violated Section 5. Commissioner Wilson noted as much in her dissents (here and here).
On March 15, the FTC’s record on noncompete cases grew by a third; that is, the agency announced a fourth settlement (again in an administrative process, and again without a decision on the merits or a stipulation of an antitrust violation). Once again, the public documents are . . . compact, providing little by way of guidance as to how (in the commission’s view) the specific terms of the agreements violated Section 5 (of course, if—as suggested in the NPRM—all such terms violate Section 5, then there you go). Again, Commissioner Wilson noticed.
Here’s a wrinkle: the staff do seem to be building on their experience regarding the use of noncompete terms in the glass-container industry. Of the four noncompete competition matters now settled (all this year), three—including the most recent—deal with firms in the glass-container industry, which, according to the allegations, is highly concentrated (at least in its labor markets). The NPRM asked for input on its sweeping proposed rule, but it also asked for input on possible regulatory alternatives. A smarter aleck than I might suggest that the commission consider regulating the use of noncompetes in the glass-container industry, given its burgeoning experience in this specific labor market (or markets).
Someone Deserves a Break Today
The commission’s foray into labor matters continues, with a request for information (RFI) on “the means by which franchisors exert control over franchisees and their workers.” On the one hand, the commission has a longstanding consumer-protection interest in the marketing of franchises, enforcing its Franchise Rule, which was first adopted in 1978 and amended in 2007. The rule chiefly requires certain disclosures—23 of them—in marketing franchise opportunities to potential franchisees. Further inquiry into the operation of the rule, and recent market developments, could be part of the normal course of regulatory business.
But this is not exactly that. The RFI raises a panoply of questions about both competition and consumer-protection issues, well beyond the scope of the rule, that may pertain to franchise businesses. It asks, among other things, how the provisions of franchise agreements “affects franchisees, consumers, workers, and competition, or . . . any justifications for such provision[s].” Working its way back to noncompetes:
The FTC is currently seeking public comment on a proposed rule to ban noncompete clauses for workers in some situations. As part of that proposed rulemaking, the FTC is interested in public comments on the question of whether that proposed rule should also apply to noncompete clauses between franchisors and franchisees.
As Alden Abbott observed, franchise businesses represent a considerable engine of economic growth. That’s not to say that a given franchisor cannot run afoul of either antitrust or consumer-protection law, but it does suggest that there are considerable positive aspects to many franchisor/franchisee relationships, and not just potential harms.
If that’s right, one might wonder whether the commission’s litany of questions about “the means by which franchisors exert control over franchisees and their workers” represents a neutral inquiry into a complex class of business models employed in diverse industries. If you’re still wondering, Elizabeth Wilkins, director of the FTC’s Office of Policy Planning (full disclosure: she was my boss for a minute and, in my opinion, a good manager), issued a spoiler alert: “This RFI will begin to unravel how the unequal bargaining power inherent in these contracts is impacting franchisees, workers, and consumers.” What could be more neutral than that?
The RFI also seeks input on the use of intra-franchise no-poach agreements, a relatively narrow but still significant issue for franchise brand development. More about us: a recent amicus brief filed by the International Center for Law & Economics and 20 scholars of antitrust law and economics (including your humble scribe, but also, and not for nothin’, a Nobel laureate) explains some of the procompetitive potential of such agreements, both generally and with a focus on a specific case, Deslandes v. McDonald’s.
It’s here, if you or the commission are interested.
Franchising plays a key role in promoting American job creation and economic growth. As explained in Forbes (hyperlinks omitted):
Franchise businesses help drive growth in local, state and national economies. They are major contributors to small business growth and job creation in nearly every local economy in the United States. On a local level, growth is spurred by a number of successful franchise impacts, including multiple new locations opening in the area and the professional development opportunities they provide for the workforce.
Franchises Create Jobs
What kind of impact do franchises have on national economic data and job growth? All in all, small businesses like franchises generate more than 60 percent of all jobs added annually in the U.S., according to the Bureau of Labor Statistics.
Although it varies widely by state, you will often find that the highest job creation market leaders are heavily influenced by franchising growth. The national impact of franchising, according to the IFA Economic Impact Study conducted by IHS Markit Economics in January 2018, is huge.
By the numbers:
There are 733,000 franchised establishments in the United States
Franchising directly creates 7.6 million jobs
Franchising indirectly supports 13.3 million jobs
Franchising directly accounts for $404.6 billion in GDP
Franchising indirectly accounts for $925.9 billion in GDP
Franchises Drive Economic Growth
How do franchises spur economic growth? Successful franchise brands can grow new locations at a faster rate than other types of small businesses. Individual franchise locations create jobs, and franchise networks multiply the jobs they create by replicating in more markets — or often in more locations in a single market if demand allows. The more they succeed, the greater the multiplier.
It’s also a matter of longevity. According to the Small Business Administration (SBA), 50 percent of new businesses fail during the first five years. Franchises can offer greater sustainability than non-franchised businesses. Franchises are much more likely to be operating after five years. This means more jobs being created longer for each location opened.
Successful franchise brands help stack the deck in favor of success by offering substantial administrative and marketing support for individual locations. Success for the brands means success for the overall economy, driving a virtuous cycle of growth.
Franchising as a business institution is oriented toward reducing economic inefficiencies in commercial relationships. Specifically, economic analysis reveals that it is a potential means for dealing with opportunism and cabining transaction costs in vertical-distribution contracts. In a survey article in the Encyclopedia of Law & Economics, Antony Dnes explores capital raising, agency, and transactions-cost-control theories of franchising. He concludes:
Several theories have been constructed to explain franchising, most of which emphasize savings of monitoring costs in an agency framework. Details of the theories show how opportunism on the part of both franchisors and franchisees may be controlled. In separate developments, writers have argued that franchisors recruit franchisees to reduce information-search costs, or that they signal franchise quality by running company stores.
Empirical studies tend to support theories emphasizing opportunism on the part of franchisors and franchisees. Thus, elements of both agency approaches and transactions-cost analysis receive support. The most robust finding is that franchising is encouraged by factors like geographical dispersion of units, which increases monitoring costs. Other key findings are that small units and measures of the importance of the franchisee’s input encourage franchising, whereas increasing the importance of the franchisor’s centralized role encourages the use of company stores. In many key respects, in result although not in principle, transaction-cost analysis and agency analysis are just two different languages describing the same franchising phenomena.
In short, overall, franchising has proven to be an American welfare-enhancement success story.
There is, however, a three-letter regulatory storm cloud on the horizon that could eventually threaten to undermine economically beneficial franchising. In a March 10 press release, the Federal Trade Commission (FTC) “requests [public] comment[s] on franchise agreements and franchisor business practices, including how franchisors may exert control over franchisees and their workers.” The public will have 60 days to submit comments in response to this request for information (RFI).
Language in the FTC’s press release makes it clear that the commission’s priors are to be skeptical of (if not downright hostile toward) the institution of franchising. The director of the FTC’s Bureau of Consumer Protection notes that there is “growing concern around unfair and deceptive practices in the franchise industry.” The director of the FTC Office of Policy Planning states that “[i]t’s clear that, at least in some instances, the promise of franchise agreements as engines of economic mobility and gainful employment is not being fully realized.” She adds that “[t]his RFI will begin to unravel how the unequal bargaining power inherent in these contracts is impacting franchisees, workers, and consumers.” The references to “unequal bargaining power” and “workers” once again highlight this FTC’s unfortunate fascination with issues that fall outside the proper scope of its competition and consumer-protection mandates.
The FTC’s press release lists representative questions on which it hopes to receive comments, including specifically:
franchisees’ ability to negotiate the terms of franchise agreements before signing, and the ability of franchisors to unilaterally make changes to the franchise system after franchisees join;
franchisors’ enforcement of non-disparagement, goodwill or similar clauses;
the prevalence and justification for certain contract terms in franchise agreements;
franchisors’ control over the wages and working conditions in franchised entities, other than through the terms of franchise agreements;
payments or other consideration franchisors receive from third parties (e.g., suppliers, vendors) related to franchisees’ purchases of goods or services from those third parties;
indirect effects on franchisee labor costs related to franchisor business practices; and
the pervasiveness and rationale for franchisors marketing their franchises using languages other than English.
This litany by implication casts franchisors in a negative light, and suggests a potential FTC interest in micromanaging the terms of franchise contractual agreements. Presumably, this would be accomplished through a new proposed rule to be issued after the RFI responses are received. Such “expert” micromanagement reflects a troublesome FTC pretense of regulatory knowledge.
But hold on, the worst is still to come. To top it all off, the press release closes by asking for comments on whether the commission’s highly problematic proposed rule on noncompete agreements should apply to noncompete clauses between franchisors and franchisees.
Barring noncompetes could severely undermine the incentive of franchisors to create new franchising opportunities in the first place, thereby reducing the use of franchising and denying new business opportunities to potential franchisees. Job creation and economic growth prospects would be harmed. As a result, franchise workers, small businesses, and consumers (who enjoy patronizing franchise outlets because of the quality assurance associated with a franchise trademark) would suffer.
The only saving grace is that a final FTC noncompete rule likely would be struck down in court. Before that happened, however, many rationally risk-averse firms would discontinue using welfare-beneficial noncompetes—including in franchising, assuming franchising was covered by the final rule.
As it is, FTC law and state-consumer protection law already provide more than ample protection for franchisees in their relationship with franchisors. The FTC’s Franchise Rule requires franchisors to make key disclosures upfront before people make a major investment. What’s more, the FTC Act prohibits material misrepresentations about any business opportunity, including franchises.
Moreover, as the FTC itself admits, franchisees may be able to use state statutes that prohibit unfair or deceptive practices to challenge conduct that violates the Franchise Rule or truth-in-advertising standards.
The FTC should stick with its current consumer-protection approach and ignore the siren song of micromanaging (and, indeed, discouraging) franchisor-franchisee relationships. If it is truly concerned about the economic welfare of consumers and producers, it should immediately withdraw the RFI.
The 117th Congress closed out without a floor vote on either of the major pieces of antitrust legislation introduced in both chambers: the American Innovation and Choice Online Act (AICOA) and the Open App Markets Act (OAMA). But it was evident at yesterday’s hearing of the Senate Judiciary Committee’s antitrust subcommittee that at least some advocates—both in academia and among the committee leadership—hope to raise those bills from the dead.
Of the committee’s five carefully chosen witnesses, only New York University School of Law’s Daniel Francis appeared to appreciate the competitive risks posed by AICOA and OAMA—noting, among other things, that the bills’ failure to distinguish between harm to competition and harm to certain competitors was a critical defect.
Yale School of Management’s Fiona Scott Morton acknowledged that ideal antitrust reforms were not on the table, and appeared open to amendments. But she also suggested that current antitrust standards were deficient and, without much explanation or attention to the bills’ particulars, that AICOA and OAMA were both steps in the right direction.
Subcommittee Chair Amy Klobuchar (D-Minn.), who sponsored AICOA in the last Congress, seems keen to reintroduce it without modification. In her introductory remarks, she lamented the power, wealth (if that’s different), and influence of Big Tech in helping to sink her bill last year.
Apparently, firms targeted by anticompetitive legislation would rather they weren’t. Folks outside the Beltway should sit down for this: it seems those firms hire people to help them explain, to Congress and the public, both the fact that they don’t like the bills and why. The people they hire are called “lobbyists.” It appears that, sometimes, that strategy works or is at least an input into a process that sometimes ends, more or less, as they prefer. Dirty pool, indeed.
There are, of course, other reasons why AICOA and OAMA might have stalled. Had they been enacted, it’s very likely that they would have chilled innovation, harmed consumers, and provided a level of regulatory discretion that would have been very hard, if not impossible, to dial back. If reintroduced and enacted, the bills would be more likely to “rein in” competition and innovation in the American digital sector and, specifically, targeted tech firms’ ability to deliver innovative products and services to tens of millions of (hitherto very satisfied) consumers.
Our colleagues at the International Center for Law & Economics (ICLE) and its affiliated scholars, among others, have explained why. For a selected bit of self-plagiarism, AICOA and OAMA received considerable attention in our symposium on Antitrust’s Uncertain Future; ICLE’s Dirk Auer had a Truth on the Market post on AICOA; and Lazar Radic wrote a piece on OAMA that’s currently up for a Concurrences award.
To revisit just a few critical points:
AICOA and OAMA both suppose that “self-preferencing” is generally harmful. Not so. A firm might invest in developing a successful platform and ecosystem because it expects to recoup some of that investment through, among other means, preferred treatment for some of its own products. Exercising a measure of control over downstream or adjacent products might drive the platform’s development in the first place (see here and here for some potential advantages). To cite just a few examples from the empirical literature, Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand, not just for Instagram, but for the entire category of photography apps; Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally; and Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, they increase the potential market for independent game developers, even in the face of competition from first-party games.
AICOA and OAMA, in somewhat different ways, favor open systems, interoperability, and/or data portability. All of these have potential advantages but, equally, potential costs or disadvantages. Whether any is procompetitive or anticompetitive depends on particular facts and circumstances. In the abstract, each represents a business model that might well be procompetitive or benign, and that consumers might well favor or disfavor. For example, interoperability has potential benefits and costs, and, as Sam Bowman has observed, those costs sometimes exceed the benefits. For instance, interoperability can be exceedingly costly to implement or maintain, and it can generate vulnerabilities that challenge or undermine data security. Data portability can be handy, but it can also harm the interests of third parties—say, friends willing to be named, or depicted in certain photos on a certain platform, but not just anywhere. And while recent commentary suggests that the absence of “open” systems signals a competition problem, it’s hard to understand why. There are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.
AICOA and OAMA both embody dubious assumptions. For example, underlying AICOA is a supposition that vertical integration is generally (or at least typically) harmful. Critics of established antitrust law can point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. And it is, in fact, possible for vertical mergers or other vertical conduct to harm competition. But that possibility, and the findings of these few studies, are routinely overstated. The weight of the empirical evidence shows that vertical integration tends to be competitively benign. For example, a widely acclaimed meta-analysis by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade led them to conclude:
“[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. . . . We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.”
Network effects and data advantages are not insurmountable, nor even necessarily harmful. Advantages of scope and scale for data sets vary according to the data at issue and to the context and analytic sophistication of those with access to the data and its application; in any case, they are subject to diminishing returns. Simple measures of market share or other numerical thresholds may signal very little of competitive import. See, e.g., this on the contestable platform paradox; Carl Shapiro on the putative decline of competition and irrelevance of certain metrics; and, more generally, antitrust’s well-grounded and wholesale repudiation of the Structure-Conduct-Performance paradigm.
These points are not new. As we note above, they’ve been made more carefully, and in more detail, before. What’s new is that the failure of AICOA and OAMA to reach floor votes in the last Congress leaves their sponsors, and many of their advocates, unchastened.
Conclusion
At yesterday’s hearing, Sen. Klobuchar noted that nations around the world are adopting regulatory frameworks aimed at “reining in” American digital platforms. True enough, but that’s exactly what AICOA and OAMA promise; they will not foster competition or competitiveness.
Novel industries may pose novel challenges, not least to antitrust. But it does not follow that the EU’s Digital Markets Act (DMA), proposed policies in Australia and the United Kingdom, or AICOA and OAMA represent beneficial, much less optimal, policy reforms. As Francis noted, the central commitments of OAMA and AICOA, like the DMA and other proposals, aim to help certain firms at the expense of other firms and consumers. This is not procompetitive reform; it is rent-seeking by less-successful competitors.
AICOA and OAMA were laid to rest with the 117th Congress. They should be left to rest in peace.
The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.
We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.
During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” The theory of the plaintiffs (respondents before the Court) is, essentially, that Twitter aided and abetted terrorism through its inaction.
The oral argument found the justices grappling with where to draw the line between aiding and abetting, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.
Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.
The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.
Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.
Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.
The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?
In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.
Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.
But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).
From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
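One way to make that balance explicit is with a stylized sketch of the least-cost-avoider logic described above (my own shorthand, not a doctrinal test): impose liability on an intermediary only when the harm it can feasibly deter exceeds the sum of its direct enforcement costs and the collateral losses from defensive over-removal of lawful content.

\[
\text{Impose intermediary liability only if} \quad H_d > C_e + C_c,
\]

where \(H_d\) is the expected harm the intermediary can actually deter, \(C_e\) is its direct cost of monitoring and enforcement, and \(C_c\) is the collateral cost of lawful speech and activity lost to defensive over-removal. On this framing, Section 230 reflects a judgment that, for user-generated content generally, \(C_c\) is large enough that strict (or near-strict) liability would fail the test.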
The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek
Fleites v. MindGeek illustrates well that the proper application of intermediary liability exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under Racketeer Influenced and Corrupt Organizations Act (RICO) and Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.
The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is Mindgeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.
The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:
In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.
Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:
…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]
In our Fleites amicus, we argued that a more practical approach is needed, one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for the extension of intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, or which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.
Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?
In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.
Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of PornHub. For example, purveyors of illegal content on PornHub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.
With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.
But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down PornHub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.
Applying the Law & Economics of Collateral Liability to Twitter in Taamneh
Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.
Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.
At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.
Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness.
The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.
This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.
Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.
And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiff’s theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.
Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:
…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.
In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:
…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.
Conclusion
If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false negatives, along with considerable costs to users and to open Internet speech.
And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.
In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.
Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.
[This is a guest post from Mario Zúñiga of EY Law in Lima, Perú. An earlier version was published in Spanish on the author’s personal blog. He gives thanks to Hugo Figari and Walter Alvarez for their comments on the initial version and special thanks to Lazar Radic for his advice and editing of the English version.]
There is a line of thinking according to which, without merger-control rules, antitrust law is “incomplete.”[1] Without such a regime, the argument goes, whenever a group of companies faces the risk of being penalized for cartelizing, they could instead merge and thus “raise prices without any legal consequences.”[2]
A few months ago, at a symposium that INDECOPI[3] organized for the first anniversary of the Peruvian Merger Control Act’s enactment,[4] Rubén Maximiano of the OECD’s Competition Division argued for the importance of merger-control regimes, asserting that mergers are “like the ultimate cartel” because a merged firm could raise prices “with impunity.”
I get Maximiano’s point. Antitrust law was born, in part, to counter the rise of trusts, which had been used to evade the restrictions that common law already imposed on “restraints of trade” in the United States. Let’s not forget, however, that these “trusts” were essentially a facade used to mask agreements to fix prices, and only to fix prices.[5] They were not real combinations of two or more businesses, as occurs in a merger. Therefore, even if one agrees that it is important to scrutinize mergers, describing them as an alternative means of “cartelizing” is, to say the least, incomplete.
While this might seem to some to be a debate about mere semantics, I think it is relevant in the broader context in which competition agencies are being pushed from various fronts toward a more aggressive application of merger-control rules.[6]
In describing mergers only as a strategy to gain more market power, or market share, or to expand profit margins, we would miss something very important: how these benefits would be obtained. Let’s not forget what the goal of antitrust law actually is. However we articulate this goal (“consumer welfare” or “the competitive process”), it is clear that antitrust law is more concerned with protecting a process than achieving any particular final result. It protects a dynamic in which, in principle, the market is trusted to be the best way to allocate resources.
In that vein, competition policy seeks to remove barriers to this dynamic, not to force a specific result. In this sense, it is not just what companies achieve in the market that matters, but how they achieve it. And there’s an enormous difference between price-fixing and buying a company. That’s why antitrust law gives a different treatment to “naked” agreements to collude while also contemplating an “ancillary agreements” doctrine.
By accepting this (“ultimate cartel”) approach to mergers, we would also be ignoring decades of economics and management literature. We would be ignoring, to start, the fundamental contributions of Ronald Coase in “The Nature of the Firm.” Acquiring other companies (or business lines or assets) allows firms to reduce transaction costs and to generate economies of scale in production. According to Coase:
The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of ‘organising’ production through the price mechanism is that of discovering what the relevant prices are. This cost may be reduced but it will not be eliminated by the emergence of specialists who will sell this information. The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.
The simple answer to that could be to enter into long-term contracts, but Coase notes that this is not so easy. He explains:
There are, however, other disadvantages – or costs – of using the price mechanism. It may be desired to make a long-term contract for the supply of some article or service. This may be due to the fact that if one contract is made for a longer period, instead of several shorter ones, then certain costs of making each contract will be avoided. Or, owing to the risk attitude of the people concerned, they may prefer to make a long rather than a short-term contract. Now, owing to the difficulty of forecasting, the longer the period of the contract is for the supply of the commodity or service, the less possible, and indeed, the less desirable it is for the person purchasing to specify what the other contracting party is expected to do.
Coase, to be sure, makes this argument mainly with respect to vertical mergers, but I think it may be applicable to horizontal mergers, as well, to the extent that the latter generate “economies of scale.” Moreover, it’s not unusual for many acquisitions that are classified as “horizontal” to also have a “vertical” component (e.g., a consumer-goods company may buy another company in the same line of business because it wants to take advantage of the latter’s distribution network; or a computer manufacturer may buy another computer company because it has an integrated unit that produces microprocessors).
We also should not leave aside the entrepreneurship element, which frequently is ignored in the antitrust literature and in antitrust law and policy. As Israel Kirzner pointed out more than 50 years ago:
An economics that emphasizes equilibrium tends, therefore, to overlook the role of the entrepreneur. His role becomes somehow identified with movements from one equilibrium position to another, with ‘innovations,’ and with dynamic changes, but not with the dynamics of the equilibrating process itself.
Instead of the entrepreneur, the dominant theory of price has dealt with the firm, placing the emphasis heavily on its profit-maximizing aspects. In fact, this emphasis has misled many students of price theory to understand the notion of the entrepreneur as nothing more than the focus of profit-maximizing decision-making within the firm. They have completely overlooked the role of the entrepreneur in exploiting superior awareness of price discrepancies within the economic system.
Working in mergers and acquisitions, either as an external advisor or as in-house counsel, has confirmed this for me (anecdotal evidence, to be sure, but with the advantage of allowing very in-depth observations). Firms that take control of other firms are seeking to exploit the comparative advantages they may have over whoever is giving up control. Sometimes a company has (or thinks it has) knowledge or assets (greater knowledge of the market, better sales strategies, a broader distribution network, better access to credit, among many other potential advantages) that allow it to make better use of the seller’s existing assets.
An entrepreneur is successful because he or she sees what others do not see. Beatriz Boza summarizes it well in a section of her book “Empresarios” in which she details the purchase of the Santa Isabel supermarket chain by Intercorp (one of Peru’s biggest conglomerates). The group’s main shareholder, Carlos Rodríguez-Pastor, had already decided to enter the retail business, and the opportunity came in 2003 when the Dutch group Ahold put Santa Isabel up for sale. The move was risky for Intercorp, in that Santa Isabel was in debt and operating at a loss. But Rodríguez-Pastor had been studying what was happening in similar markets in other countries and knew that having a stake in the supermarket business would allow him to reach more consumer-credit customers, in addition to offering other vertical-integration opportunities. In retrospect, the deal can only be described as a success. In 2014, the company reached 34.1% market share and took in revenues of more than US$1.25 billion, with an EBITDA margin of 6.2%. Rodríguez-Pastor saw the synergies that others did not see, but he also dared to take the risk. As Boza writes:
‘Nobody ever saw the synergies,’ concludes the businessman, reminding the businessmen and executives who warned him that he was going to go bankrupt after the acquisition of Ahold’s assets. ‘Today we have a retail circuit that no one else can have.’
Competition authorities need to recognize these sorts of synergies and efficiencies,[7] and take them into account as compensating effects even where the combination might otherwise represent some risk to competition. That is why the vast majority of proposed mergers are approved by competition authorities around the world.
There is some evidence that companies sanctioned in cartel cases later choose to merge,[8] but what this requires is that competition authorities put more effort into scrutinizing those particular mergers, not that they adopt a much more aggressive approach to reviewing all mergers.
I am not proposing, of course, that we should abolish merger control or even that it should necessarily be “permissive.” Some mergers may indeed represent a genuine risk to competition. But in analyzing them, employing technical analytic techniques and robust evidence, it is important to recognize that entrepreneurs may have countless valid business reasons to carry out a merger—reasons that are often not fully formalized or even understood by the entrepreneurs themselves, since they operate under a high degree of uncertainty and risk.[9] An entrepreneur’s primary motivation is to maximize his or her own benefit, but we cannot just assume that this will be greater after “concentrating” markets.[10]
Competition agencies must recognize this, and not simply presume anticompetitive intentions or impacts. Antitrust law—and, in particular, the concentration-control regimes throughout the world—require that any harm to competition must be proved, and this is so precisely because mergers are not like cartels.
[1] The debate prior to the enactment of Peru’s Merger Control Act became too politicized and polarized. Opponents went so far as to assert that merger control was “unconstitutional” (highly debatable) or that it constituted an interventionist policy (something that I believe cannot be assumed but is contingent on the type of regulation that is approved or how it is applied). On the other hand, advocates of the regulation predicted an inevitable scenario of concentrated markets and monopolies if the act was not approved (without any empirical evidence of this claim). My personal position was initially skeptical, considering that the priority—from a competition policy point of view, at least in a developing economy like Peru—should continue to be deregulation to remove entry barriers and to prosecute cartels. That being said, a well-designed and well-enforced merger-control regime (i.e., one that generally does not block mergers that are not harmful to competition; is agile; and has adequate protection from political interference) does not have to be detrimental to markets and can generate benefits in terms of avoiding anticompetitive mergers.
In Peru, the Commission for the Defense of Free Competition and its Technical Secretariat have been applying the law pretty reasonably. To date, of more than 20 applications, the vast majority have been approved without conditions, and one conditionally. In addition, approval requests have been resolved in an average of 23 days, well within the statutory deadline.
[2] See, e.g., this peer-reviewed 2018 OECD report: “The adoption of a merger control regime should be a priority for Peru, since in its absence competitors can circumvent the prohibition against anticompetitive agreements by merging – with effects potentially similar to those of a cartel immune from antitrust scrutiny.”
[3] National Institute for the Defense of Competition and the Protection of Intellectual Property (INDECOPI, after its Spanish acronym), is the Peruvian competition agency. It is an administrative agency with a broad scope of tasks, including antitrust law, unfair competition law, consumer protection, and intellectual property registration, among others. It can adjudicate cases and impose fines. Its decisions can be challenged before courts.
[4] You can watch the whole symposium (which I recommend) here.
[5] See Gregory J. Werden’s “The Foundations of Antitrust.” Werden explains how the term “trust” had lost its original legal meaning and come to designate all kinds of agreements intended to restrict competition.
[7] See, e.g., the “Efficiencies” section of the U.S. Justice Department and Federal Trade Commission’s Horizontal Merger Guidelines, which are currently under review.
It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.
But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?
Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.
Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can regulate false speech only under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “actual malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.
Section 230 Likely Does Not Apply to Information Developed by an LLM
Section 230(c)(1) provides:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”
The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”
And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”
Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.
Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.
There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:
Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.
On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.”
The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard comprise content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)
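To make the “synthesizing” point concrete, here is a minimal toy sketch of the next-word guessing that the following paragraph describes. The bigram counts are invented purely for illustration; nothing here reflects any vendor’s actual model or API.

```python
import random

# Invented bigram counts standing in for the statistical patterns an LLM learns
# from web-scale text. These numbers are illustrative only.
bigram_counts = {
    "the":   {"court": 7, "model": 3},
    "court": {"ruled": 8, "found": 2},
    "model": {"guessed": 10},
}

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = bigram_counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Each generated word is a fresh statistical guess, not a quotation of any
# single underlying source document.
output = ["the"]
while output[-1] in bigram_counts:
    output.append(next_word(output[-1]))
print(" ".join(output))
```

On this (admittedly simplified) picture, the output is assembled from learned statistics rather than copied from a particular third-party document—which is why it looks more like content “created or developed” by the service than like “information provided by another information content provider.”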
The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained engines to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM. As ChatGPT put it in response to my query here:
Proving Defamation by AI
In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.
Defamation is the most logical cause of action that could be brought over false information provided by an LLM app. But these apps are unlikely to know much about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant reputational damages from falsehoods spread online are those who are in the public eye. This means that, for the purposes of a defamation suit, it is public figures who are most likely to sue.
As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement about him was published to a third party and that it resulted in damages to him. For the sake of argument, let’s say he can establish those elements. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”
Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”
The question arises whether actual malice can be attributed to an LLM. It seems unlikely that the AI’s creators could be said to have trained it in a way that they “knew” the answers provided would be false. But it is a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, while the LLMs are still in their infancy and giving false answers more often.
It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures, and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results:
When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics:
While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.
Conclusion
While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.
Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to accommodate peak demand.
Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.
The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.
In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.
One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:
That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.
It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will come after another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.
“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”
This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.
Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: The pole owner, or the company whose attachment is driving the need for a new pole?
It’s not a simple question to answer.
In comments to the FCC, the International Center for Law & Economics (ICLE) notes:
The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.
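A stylized numerical example makes the hold-out incentive concrete. All figures below are invented for illustration; they bear no relation to actual pole or attachment costs.

```python
# Stylized hold-out illustration under a last-attacher-pays rule.
# Every number here is hypothetical; none reflects actual FCC rates or costs.
REPLACEMENT_COST = 10_000     # cost of a new, taller pole
VALUE_PER_ATTACHER = 6_000    # private value each prospective attacher gets from attaching
N_ATTACHERS = 3               # firms waiting for space on the pole

payoff_if_first = VALUE_PER_ATTACHER - REPLACEMENT_COST  # the requester bears the full cost
payoff_if_wait = VALUE_PER_ATTACHER                      # free-ride once someone else pays

print(f"Request the replacement yourself: {payoff_if_first:+,}")   # -4,000
print(f"Wait for another attacher to pay: {payoff_if_wait:+,}")    # +6,000

joint_surplus = N_ATTACHERS * VALUE_PER_ATTACHER - REPLACEMENT_COST
print(f"Joint surplus if the pole is replaced: {joint_surplus:+,}")  # +8,000
# Every attacher prefers to wait, so a replacement that would be jointly
# worthwhile may never be requested -- the hold-out problem described above.
```

Because each prospective attacher is better off waiting for someone else to trigger (and pay for) the replacement, an upgrade that is jointly worthwhile may never be requested at all.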
This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:
A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.
To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.
In a Feb. 14 column in the Wall Street Journal, Commissioner Christine Wilson announced her intent to resign her position on the Federal Trade Commission (FTC). For those curious to know why, she beat you to the punch in the title and subtitle of her column: “Why I’m Resigning as an FTC Commissioner: Lina Khan’s disregard for the rule of law and due process make it impossible for me to continue serving.”
This is the seventh FTC roundup I’ve posted to Truth on the Market since joining the International Center for Law & Economics (ICLE) last September, having left the FTC at the end of August. Relentlessly astute readers of this column may have observed that I cited (and linked to) Commissioner Wilson’s dissents in five of my six previous efforts—actually, to three of them in my Nov. 4 post alone.
As anyone might guess, I’ve linked to Wilson’s dissents (and concurrences, etc.) for the same reason I’ve linked to other sources: I found them instructive in some significant regard. Priors and particular conclusions of law aside, I generally found Wilson’s statements to be well-grounded in established principles of antitrust law and economics. I cannot say the same about statements from the current majority.
Commission dissents are not merely the bases for blog posts or venues for venting. They can provide a valuable window into agency matters for lawmakers and, especially, for the courts. And I would suggest that they serve an important institutional role at the FTC, whatever one thinks of the merits of any specific matter. There’s really no point to having a five-member commission if all its votes are unanimous and all its opinions uniform. Moreover, establishing the realistic possibility of dissent can lend credence to those commission opinions that are unanimous. And even in these fractious times, there are such opinions.
Wilson did not spring forth fully formed from the forehead of the U.S. Senate. She began her FTC career as a Georgetown student, serving as a law clerk in the Bureau of Competition; she returned some years later to serve as chief of staff to Chairman Tim Muris; and she returned again when confirmed as a commissioner in April 2018 (later sworn in in September 2018). In between stints at the FTC, she gained antitrust experience in private practice, both in law firms and as in-house counsel. I would suggest that her agency experience, combined with her work in the private sector, provided a firm foundation for the judgments required of a commissioner.
Daniel Kaufman, former acting director of the FTC’s Bureau of Consumer Protection, reflected on Wilson’s departure here. Personally, with apologies for the platitude, I would like to thank Commissioner Wilson for her service. And, not incidentally, for her consistent support for agency staff.
Her three Democratic colleagues on the commission also thanked her for her service, if only collectively, and tersely: “While we often disagreed with Commissioner Wilson, we respect her devotion to her beliefs and are grateful for her public service. We wish her well in her next endeavor.” That was that. No doubt heartfelt. Wilson’s departure column was a stern rebuke to the Commission, so there’s that. But then, stern rebukes fly in all directions nowadays.
While I’ve never been a commissioner, I recall a far nicer and more collegial sendoff when I departed from my lowly staff position. Come to think of it, I had a nicer sendoff when I left a large D.C. law firm as a third-year associate bound for a teaching position, way back when.
So, what else is new?
In January, I noted that “the big news at the FTC is all about noncompetes”; that is, about the FTC’s proposed rule to ban the use of noncompetes more-or-less across the board. The rule would cover all occupations and all income levels, with a narrow exception for the sale of the business in which the “employee” has at least a 25% ownership stake (why 25%?), and a brief nod to statutory limits on the commission’s regulatory authority with regard to nonprofits, common carriers, and some other entities.
Colleagues Brian Albrecht (and here), Alden Abbott, Gus Hurwitz, and Corbin K. Barthold also have had things to say about it. I suggested that there were legitimate reasons to be concerned about noncompetes in certain contexts—sometimes on antitrust grounds, and sometimes for other reasons. But certain contexts are far from all contexts, and a mixed and developing body of economic literature, coupled with limited FTC experience in the subject, did not militate in favor of nearly so sweeping a regulatory proposal. This is true even before we ask practical questions about staffing for enforcement or, say, whether the FTC Act conferred the requisite jurisdiction on the agency.
This is the first or second FTC competition rulemaking ever, depending on how one counts, and it is the first this century, in any case. Here’s administrative scholar Thomas Merrill on FTC competition rulemaking. Given the Supreme Court’s recent articulation of the major questions doctrine in West Virginia v. EPA, a more modest and bipartisan proposal might have been far more prudent. A bad turn at the court can lose more than the matter at hand. Comments are due March 20, by the way.
Now comes a missive from the House Judiciary Committee, along with multiple subcommittees, about the noncompete NPRM. The letter opens by stating that “The Proposed Rule exceeds its delegated authority and imposes a top-down one-size-fits-all approach that violates basic American principles of federalism and free markets.” And “[t]he Biden FTC’s proposed rule on non-compete clauses shows the radicalness of the so-called ‘hipster’ antitrust movement that values progressive outcomes over long-held legal and economic principles.”
Ouch. Other than that, Mr. Jordan, how did you like the play?
There are several single-spaced pages on the “FTC’s power grab” before the letter gets to a specific, and substantial, formal document request in the service of congressional oversight. That does not stop the rulemaking process, but it does not bode well either.
Part of why this matters is that there’s still solid, empirically grounded, pro-consumer work that’s at risk. In my first Truth on the Market post, I applauded FTC staff comments urging New York State to reject a certificate of public advantage (COPA) application. As I noted there, COPAs are rent-seeking mechanisms chiefly aimed at insulating anticompetitive mergers (and sometimes conduct) from federal antitrust scrutiny. Commission and staff opposition to COPAs was developed across several administrations on well-established competition principles and a significant body of research regarding hospital consolidation, health care prices, and quality of care.
Office of Policy Planning (OPP) Director Elizabeth Wilkins has now announced that the parties in question have abandoned their proposed merger. Wilkins thanks the staff of OPP, the Bureau of Economics, and the Bureau of Competition for their work on the matter, and rightly so. There’s no new-fangled notion of Section 5 or mergers at play. The work has developed over decades and it’s the sort of work that should continue. Notwithstanding numerous (if not legion) departures, good and experienced staff and established methods remain, and ought not to be repudiated, much less put at risk.
I won’t recapitulate the much-discussed Meta/Within case, but on the somewhat-less-discussed matter of the withdrawal, I’ll consider why the FTC announced that the matter “is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.” While the matter was not litigated to its conclusion in federal court, the substantial and workmanlike opinion denying the preliminary injunction made it clear that the FTC had lost on the facts under both of the theories of harm to potential competition that it had advanced.
“Having reviewed and considered the objective evidence of Meta’s capabilities and incentives, the Court is not persuaded that this evidence establishes that it was ‘reasonably probable’ Meta would enter the relevant market.”
An appeal in the 9th U.S. Circuit Court of Appeals likely seemed fruitless. Stopping short of a final judgment, the FTC could have tried for a do-over in its internal administrative Part 3 process, and might have fared well before itself, but that would have demanded considerable additional resources in a case that, in the long run, was bound to be a loser. Bloomberg had previously reported that the commission voted to proceed with the case against the merger contra the staff’s recommendation. Here, the commission noted that “Complaint Counsel [the Commission’s own staff] has not registered any objection” to Meta’s motion to withdraw proceedings from adjudication.
There are novel approaches to antitrust. And there are the courts and the law. And, as noted above, many among the staff are well-versed in that law and experienced at investigations. You can’t always get what you want, but if you try sometimes, you get what you deserve.
Economists have long recognized that innovation is key to economic growth and vibrant competition. As an Organisation for Economic Co-operation and Development (OECD) report on innovation and growth explains, “innovative activity is the main driver of economic progress and well-being as well as a potential factor in meeting global challenges in domains such as the environment and health. . . . [I]nnovation performance is a crucial determinant of competitiveness and national progress.”
It follows that an economically rational antitrust policy should be highly attentive to innovation concerns. In a December 2020 OECD paper, David Teece and Nicolas Petit caution that antitrust today is “missing broad spectrum competition that delivers innovation, which in turn is the main driver of long term growth in capitalist economies.” Thus, the authors stress that “[i]t is about time to put substance behind economists’ and lawyers’ long time admonition to inject more dynamism in our analysis of competition. An antitrust renaissance, not a revolution, is long overdue.”
Accordingly, before the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) finalize their new draft merger guidelines, they would be well-advised to take heed of new research that “there is an important connection between merger activity and innovation.” This connection is described in a provocative new NERA Economic Consulting paper by Robert Kulick and Andrew Card titled “Mergers, Industries, and Innovation: Evidence from R&D Expenditures and Patent Applications.” As the executive summary explains (citation deleted):
For decades, there has been a broad consensus among policymakers, antitrust enforcers, and economists that most mergers pose little threat from an antitrust perspective and that mergers are generally procompetitive. However, over the past year, leadership at the FTC and DOJ has questioned whether mergers are, as a general matter, economically beneficial and asserted that mergers pose an active threat to innovation. The Agencies have also set the stage for a substantial increase in the scope of merger enforcement by focusing on new theories of anticompetitive harm such as elimination of potential competition from nascent competitors and the potential for cumulative anticompetitive harm from serial acquisitions. Despite the importance of the question of whether mergers have a positive or negative effect on industry-level innovation, there is very little empirical research on the subject. Thus, in this study, we investigate this question utilizing, what is to our knowledge, a never before used dataset combining industry-level merger data from the FTC/DOJ annual HSR reports with industry-level data from the NSF on R&D expenditure and patent applications. We find a strong positive and statistically significant relationship between merger activity and industry-level innovative activity. Over a three- to four-year cycle, a given merger is associated with an average increase in industry-level R&D expenditure of between $299 million and $436 million in R&D intensive industries. Extrapolating our results to the industry level implies that, on average, mergers are associated with an increase in R&D expenditure of between $9.27 billion and $13.52 billion per year in R&D intensive industries and an increase of between 1,430 and 3,035 utility patent applications per year. Furthermore, using a statistical technique developed by Nobel Laureate Clive Granger, we find that the direction of causality goes, to a substantial extent, directly from merger activity to increased R&D expenditure and patent applications. Based on these findings we draw the following key conclusions:
There is no evidence that mergers are generally associated with reduced innovation, nor do the results indicate that supposedly lax antitrust enforcement over the period from 2008 to 2020 diminished innovative activity. Indeed, R&D expenditure and patent applications increased substantially over the period studied, and this increase was directly linked to increases in merger activity.
In previous research, we found that “trends in industrial concentration do not provide a reliable basis for making inferences about the competitive effects of a proposed merger” as “trends in concentration may simply reflect temporary fluctuations which have no broader economic significance” or are “often a sign of increasing rather than decreasing market competition.” This study presents further evidence that previous consolidation in an industry or a “trend toward concentration” may reflect procompetitive responses to competitive pressures, and therefore should not play a role in merger review beyond that already embodied in the market-level concentration screens considered by the Agencies.
The Agencies should proceed cautiously in pursuing novel theories of anticompetitive harm; our findings are consistent with the prevailing consensus from the previous decades that there is an important connection between merger activity and innovation, and thus, a broad “anti-merger” policy, particularly one pursued in the absence of strong empirical evidence, has the potential to do serious harm by perversely inhibiting innovative activity.
Due to the link between mergers and innovative activity in R&D intensive industries where the potential for anticompetitive consequences can be resolved through remedies, relying on remedies rather than blocking transactions outright may encourage innovation while protecting consumers where there are legitimate competitive concerns about a particular transaction.
The potential for mergers to create procompetitive benefits should be taken seriously by policymakers, antitrust enforcers, courts, and academics and the Agencies should actively study the potential benefits, in addition to the costs, of mergers.
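The Granger-causality technique the summary invokes can be illustrated with a minimal sketch: the question is whether lagged merger activity improves predictions of current R&D spending beyond what lagged R&D alone provides. The series below are simulated stand-ins, not the HSR or NSF data the paper actually uses, and the code is illustrative rather than a reproduction of the authors’ specification.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n_years = 40
mergers = rng.normal(loc=100, scale=10, size=n_years)   # simulated merger activity

rd = np.empty(n_years)
rd[0] = 50.0
for t in range(1, n_years):
    # By construction, R&D responds to last year's merger activity plus noise.
    rd[t] = 0.5 * rd[t - 1] + 0.3 * mergers[t - 1] + rng.normal(scale=5)

# Column order matters: the test asks whether the SECOND column (mergers)
# Granger-causes the FIRST column (R&D).
data = np.column_stack([rd, mergers])
results = grangercausalitytests(data, maxlag=3)

# A small p-value rejects the null of "no Granger causality" at that lag.
p_value_lag1 = results[1][0]["ssr_ftest"][1]
print(f"p-value at lag 1: {p_value_lag1:.4f}")
```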
In short, the Kulick & Card paper lends valuable empirical support for an economics-based approach to merger analysis that fully takes into account innovation concerns. If the FTC and DOJ truly care about strengthening the American economy (consistent with “President Biden’s stated goals of renewing U.S. innovation and global competitiveness”—see, e.g., here and here), they should take heed in crafting new merger guidelines. An emphasis in the guidelines on avoiding interference with merger-related innovation (taking into account research by such scholars as Kulick, Card, Teece, and Petit) would demonstrate that the antitrust agencies are fully behind President Joe Biden’s plans to promote an innovative economy.
The Federal Trade Commission (FTC) announced in a notice of proposed rulemaking (NPRM) last month that it intends to ban most noncompete agreements. Is that a good idea? As a matter of policy, the question is debatable. So far as the NPRM is concerned, however, that debate is largely hypothetical. It is unlikely that any rule the FTC issues will ever take effect.
Several formidable legal obstacles stand in the way. The FTC seeks to rest its rule on the authority of Section 5 of the FTC Act, which bars “unfair methods of competition” in commerce. But Section 5 says nothing about rulemaking, as opposed to case-by-case prosecution.
There is a rulemaking provision in Section 6, but for reasons explained elsewhere, it only empowers the FTC to set out its own internal procedures. And if the FTC could craft binding substantive rules—such as a ban on noncompete agreements—that would violate the U.S. Constitution. It would transfer lawmaking power from Congress to an administrative agency, in violation of Article I.
What’s more, the U.S. Supreme Court recently confirmed the existence of a “major questions doctrine,” under which an agency attempting to “make major policy decisions itself” must “point to clear congressional authorization for the power it claims.” The FTC’s proposed rule would sweep aside tens of millions of noncompete clauses; it would very likely alter salaries to the tune of hundreds of billions of dollars a year; and it would preempt dozens of state laws. That’s some “major” policymaking. Nothing in the FTC Act “clear[ly]” authorizes the FTC to undertake it.
But suppose that none of these hurdles existed. Surely, then, the FTC would get somewhere—right? In seeking to convince a court to read the statute its way, after all, it could make a bid for Chevron deference. Named for Chevron v. NRDC (1984), that rule (of course) requires a court to defer to an agency’s reasonable construction of a law the agency administers. With the benefit of such judicial obeisance, the FTC would not have to show that noncompete clauses are unlawful under the best reading of Section 5. It could get away with showing merely that they’re unlawful under a plausible reading of Section 5.
But Chevron won’t do the trick.
The Chevron test can be broken down into three phases. A court begins by determining whether the test even applies (often called Chevron “step zero”). If it does, the court next decides whether the statute in question has a clear meaning (Chevron step one). And if it turns out that the statute is unclear—is ambiguous—the court proceeds to ask whether the agency’s interpretation of the statute is reasonable, and if it is, to yield to it (Chevron step two).
Each of these stages poses a problem for the FTC. Not long ago, the Supreme Court showed why this is so. True, Kisor v. Wilkie (2019) is not about Chevron deference. Not directly. But the decision upholds a cognate doctrine, Auer deference (named for Auer v. Robbins (1997)), under which a court typically defers to an agency’s understanding of its own regulations. Kisor leans heavily, in its analysis, both on Chevron itself and on later opinions about the Chevron test, such as United States v. Mead Corp. (2001) and City of Arlington v. FCC (2013). So it is hardly surprising that Kisor makes several points that are salient here.
Start with what Kisor says about when Chevron comes into play at all. Chevron and Auer stand, Kisor reminds us, on a presumption that Congress generally wants expert agencies, not generalist courts, to make the policy judgments needed to fill in the details of a statutory scheme. It follows, Kisor remarks, that if an “agency’s interpretation” does not “in some way implicate its substantive expertise,” there’s no reason to defer to it.
When is an agency not wielding its “substantive expertise”? One example Kisor offers is when the disputed statutory language is derived from the common law. Parsing common-law terms, Kisor notes, “fall[s] more naturally into a judge’s bailiwick.”
This is bad news for the FTC. Think about it. When it put the words “unfair methods of competition” in Section 5, could Congress have meant “unfair” in the cosmic sense? Could it have intended to grant a bunch of unelected administrators a roving power to “do justice”? Of course not. No, the phrase “unfair methods of competition” descends from the narrow, technical, humdrum common-law concept of “unfair competition.”
The FTC has no special insight into what the term “unfair competition” meant at common law. Figuring that out is judges’ work. That Congress fiddled with things a little does not change this conclusion. Adding the words “methods of” does not rip the words “unfair competition” from their common-law roots and launch them into a semantic void.
It remains the case—as Justice Felix Frankfurter put it—that when “a word is obviously transplanted” from the common law, it “brings the old soil with it.” And an agency, Kisor confirms, “has no comparative expertise” at digging around in that particular dirt.
The FTC lacks expertise not only in understanding the common law, but even in understanding noncompete agreements. Dissenting from the issuance of the NPRM, (soon to be former) Commissioner Christine S. Wilson observed that the agency has no experience prosecuting employee noncompete clauses under Section 5.
So the FTC cannot get past Chevron step zero. Nor, if it somehow crawled its way there, could the agency satisfy Chevron step one. Chevron directs a court examining a text for a clear meaning to employ the “traditional tools” of construction. Kisor stresses that a court must exhaust those tools. It must “carefully consider the text, structure, history, and purpose” of the regulation (under Auer) or statute (under Chevron). “Doing so,” Kisor assures us, “will resolve many seeming ambiguities.”
The text, structure, history, and purpose of Section 5 make clear that noncompete agreements are not an unfair method of competition. Certainly not as a species. “‘Unfair competition,’ as known to the common law,” the Supreme Court explained in Schechter Poultry v. United States (1935), was “a limited concept.” It was “predicated of acts which lie outside the ordinary course of business and are tainted by fraud, or coercion, or conduct otherwise prohibited by law.” Under the common law, noncompete agreements were generally legal—so we know that they did not constitute “unfair competition.”
And although Section 5 bars “unfair methods of competition,” the altered wording still doesn’t capture conduct that isn’t unfair. The Court has said that the meaning of the phrase is properly “left to judicial determination as controversies arise.” It is to be fleshed out “in particular instances, upon evidence, in the light of particular competitive conditions.” The clear import of these statements is that the FTC may not impose broad prohibitions that sweep in legitimate business conduct.
Yet a blanket ban on noncompete clauses would inevitably erase at least some agreements that are not only not wrongful, but beneficial. “There is evidence,” the FTC itself concedes, “that non-compete clauses increase employee training and other forms of investment.” Under the plain meaning of Section 5, the FTC can’t condemn a practice altogether just because it is sometimes, or even often, unfair. It must, at the very least, do the work of sorting out, “in particular instances,” when the costs outweigh the benefits.
By definition, failure at Chevron step one entails failure at Chevron step two. It is worth noting, though, that even if the FTC reached the final stage, and even if, once there, it convinced a court to disregard the common law and read the word “unfair” in a colloquial sense, it would still not be home free. “Under Chevron,” Kisor states, “the agency’s reading must fall within the bounds of reasonable interpretation.” This requirement is important in light of the “far-reaching influence of agencies and the opportunities such power carries for abuse.”
Even if one assumes (in the teeth of Article I) that Congress could hand an independent agency unfettered authority to stamp out “unfairness” in the economy, that does not mean that Congress, in fact, did so in Section 5. Why did Congress write Section 5 as it did? Largely because it wanted to give the FTC the flexibility to deal with new and unexpected forms of wrongdoing as they arise. As one congressional report concluded, “it is impossible to frame definitions which embrace all unfair practices” in advance. “The purpose of Congress,” wrote Justice Louis Brandeis (who had a hand in drafting the law), was to ensure that the FTC can “prevent” an emergent “unfair method” from taking hold as a “general practice.”
Noncompete agreements are not some startling innovation. They’ve been around—and allowed—for hundreds of years. If Congress simply wanted to ensure that the FTC can nip new threats to competition in the bud, the NPRM is not a proper use of the FTC’s power under Section 5.
In any event, what Congress almost certainly did not intend was to hand the FTC the capacity (as Chair Lina Khan would have it) to “shape[] the distribution of power and opportunity across our economy.” The FTC’s commissioners are not elected, and they cannot be removed (absent misconduct) by the president. They lack the democratic legitimacy or political accountability to restructure the economy.
All the same, nothing about Section 5 suggests that Congress gave the agency such awesome power. What leeway Chevron might give here, common sense takes away. The more the FTC “seeks to break new ground by enjoining otherwise legitimate practices,” a federal court of appeals once declared, “the closer must be our scrutiny upon judicial review.” It falls to the judiciary to ensure that the agency does not “undu[ly] … interfere[]” with “our country’s competitive system.”
We have come full circle. Article I and the “major questions” principle tell us that the FTC cannot use four words in Section 5 of the FTC Act to issue a rule that disrupts contractual relations, tramples federalism, and shifts around many billions of dollars in wealth. And if we march through the Chevron analysis anyway, we find that, even at Chevron step two, the statute still can’t bear the weight. Chevron deference is not a license for the FTC to ignore the separation of powers and micromanage the economy.
Various states recently have enacted legislation that requires authors, publishers, and other copyright holders to license to lending libraries digital texts, including e-books and audio books. These laws violate the Constitution’s conferral on Congress of the exclusive authority to set national copyright law. Furthermore, as a policy matter, they offend free-market principles.
The laws interfere with the right of copyright holders to set the price for the fruit of their intellectual labor. The laws lower incentives for the production of new creative digital works in the future, thereby reducing consumers’ and producers’ surplus. Furthermore, the claim that “unfair” pricing prevents libraries from stocking “sufficient” numbers of e-books to satisfy public demand is belied by the reality that libraries have substantially grown their digital collections in recent years.
Finally, proponents of legislation ignore the fact that libraries actually pay far less than consumers do when they purchase an e-book license for personal use.
In the world of video games, the process by which players train themselves or their characters in order to overcome a difficult “boss battle” is called “leveling up.” I find that the phrase also serves as a useful metaphor in the context of corporate mergers. Here, “leveling up” can be thought of as acquiring another firm in order to enter or reinforce one’s presence in an adjacent market where a larger and more successful incumbent is already active.
In video-game terminology, that incumbent would be the “boss.” Acquiring firms choose to level up when they recognize that building internal capacity to compete with the “boss” is too slow, too expensive, or is simply infeasible. An acquisition thus becomes the only way “to beat the boss” (or, at least, to maximize the odds of doing so).
Alas, this behavior is often mischaracterized as a “killer acquisition” or “reverse killer acquisition.” What separates leveling up from killer acquisitions is that the former serves to turn the merged entity into a more powerful competitor, while the latter attempt to weaken competition. In the case of “reverse killer acquisitions,” the assumption is that the acquiring firm would have entered the adjacent market on its own absent the merger, leaving even more firms competing in that market.
In other words, the distinction ultimately boils down to a simple (though hard to answer) question: could both the acquiring and target firms have effectively competed with the “boss” without a merger?
Because they are ubiquitous in the tech sector, these mergers—sometimes also referred to as acquisitions of nascent competitors—have drawn tremendous attention from antitrust authorities and policymakers. All too often, policymakers fail to adequately consider the realistic counterfactual to a merger and mistake leveling up for a killer acquisition. The most recent high-profile example is Meta’s acquisition of the virtual-reality fitness app Within. But in what may be a hopeful sign of a turning of the tide, a federal court appears set to clear that deal over objections from the Federal Trade Commission (FTC).
Some Recent ‘Boss Battles’
The canonical example of leveling up in tech markets is likely Google’s acquisition of Android back in 2005. While Apple had not yet launched the iPhone, it was already clear by 2005 that mobile would become an important way to access the internet (including Google’s search services). Rumors were swirling that Apple, following its tremendously successful iPod, had started developing a phone, and Microsoft had been working on Windows Mobile for a long time.
In short, there was a serious risk that Google would be reliant on a single mobile gatekeeper (i.e., Apple) if it did not move quickly into mobile. Purchasing Android was seen as the best way to do so. (Indeed, averting an analogous sort of threat appears to be driving Meta’s move into virtual reality today.)
The natural next question is whether Google or Android could have succeeded in the mobile market absent the merger. My guess is that the answer is no. In 2005, Google did not produce any consumer hardware. Quickly and successfully making the leap would have been daunting. As for Android:
Google had significant advantages that helped it to make demands from carriers and OEMs that Android would not have been able to make. In other words, Google was uniquely situated to solve the collective action problem stemming from OEMs’ desire to modify Android according to their own idiosyncratic preferences. It used the appeal of its app bundle as leverage to get OEMs and carriers to commit to support Android devices for longer with OS updates. The popularity of its apps meant that OEMs and carriers would have great difficulty in going it alone without them, and so had to engage in some contractual arrangements with Google to sell Android phones that customers wanted. Google was better resourced than Android likely would have been and may have been able to hold out for better terms with a more recognizable and desirable brand name than a hypothetical Google-less Android. In short, though it is of course possible that Android could have succeeded despite the deal having been blocked, it is also plausible that Android became so successful only because of its combination with Google. (citations omitted)
In short, everything suggests that Google’s purchase of Android was a good example of leveling up. Note that much the same could be said about the company’s decision to purchase Fitbit in order to compete against Apple and its Apple Watch (which quickly dominated the market after its launch in 2015).
A more recent example of leveling up is Microsoft’s planned acquisition of Activision Blizzard. In this case, the merger appears to be about improving Microsoft’s competitive position in the platform market for game consoles, rather than in the adjacent market for games.
At the time of writing, Microsoft is staring down the barrel of a gun: Sony is on the cusp of becoming the runaway winner of yet another console generation. Microsoft’s executives appear to have concluded that this is partly due to a lack of exclusive titles on the Xbox platform. Hence, they are seeking to purchase Activision Blizzard, one of the most successful game studios, known among other things for its acclaimed Call of Duty series.
Again, the question is whether Microsoft could challenge Sony by improving its internal game-publishing branch (known as Xbox Game Studios) or whether it needs to acquire a whole new division. This is obviously a hard question to answer, but a cursory glance at the titles shipped by Microsoft’s publishing studio suggests that the issues it faces could not simply be resolved by throwing more money at its existing capacities. Indeed, Xbox Game Studios seems to be plagued by organizational failings that might only be solved by creating more competition within the company. As one gaming journalist summarized:
The current predicament of these titles goes beyond the amount of money invested or the buzzwords used to market them – it’s about Microsoft’s plan to effectively manage its studios. Encouraging independence isn’t an excuse for such a blatantly hands-off approach which allows titles to fester for years in development hell, with some fostering mistreatment to occur. On the surface, it’s just baffling how a company that’s been ranked as one of the top 10 most reputable companies eight times in 11 years (as per RepTrak) could have such problems with its gaming division.
The upshot is that Microsoft appears to have recognized that its own game-development branch is failing, and that acquiring a well-functioning rival is the only way to rapidly compete with Sony. There is thus a strong case to be made that competition authorities and courts should approach the merger with an open mind, as it has at least the potential to significantly increase competition in the game-console industry.
Finally, leveling up is sometimes a way for smaller firms to try and move faster than incumbents into a burgeoning and promising segment. The best example of this is arguably Meta’s effort to acquire Within, a developer of VR fitness apps. Rather than being an attempt to thwart competition from a competitor in the VR app market, the goal of the merger appears to be to compete with the likes of Google, Apple, and Sony at the platform level. As Mark Zuckerberg wrote back in 2015, when Meta’s VR/AR strategy was still in its infancy:
Our vision is that VR/AR will be the next major computing platform after mobile in about 10 years… The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing….
Over the next few years, we’re going to need to make major new investments in apps, platform services, development / graphics and AR. Some of these will be acquisitions and some can be built in house. If we try to build them all in house from scratch, then we risk that several will take too long or fail and put our overall strategy at serious risk. To derisk this, we should acquire some of these pieces from leading companies.
In short, many of the tech mergers that critics portray as killer acquisitions are just as likely to be attempts by firms to compete head-on with incumbents. This “leveling up” is precisely the sort of beneficial outcome that antitrust laws were designed to promote.
Building Products Is Hard
Critics are often quick to apply the “killer acquisition” label to any merger where a large platform is seeking to enter or reinforce its presence in an adjacent market. The preceding paragraphs demonstrate that it’s not that simple, as these mergers often enable firms to improve their competitive position in the adjacent market. For obvious reasons, antitrust authorities and policymakers should be careful not to thwart this competition.
The harder part is how to separate the wheat from the chaff. While I don’t have a definitive answer, an easy first step would be for authorities to more seriously consider the supply side of the equation.
Building a new product is incredibly hard, even for the most successful tech firms. Microsoft famously failed with its Zune music player and Windows Phone. The Google+ social network never gained any traction. Meta’s foray into the cryptocurrency industry was a sobering experience. Amazon’s Fire Phone bombed. Even Apple, which usually epitomizes Silicon Valley firms’ ability to enter new markets, has had its share of dramatic failures: Apple Maps, its Ping social network, and the first HomePod, to name a few.
To put it differently, policymakers should not assume that internal growth is always a realistic alternative to a merger. Instead, they should carefully examine whether such a strategy is timely, cost-effective, and likely to succeed.
This is obviously a daunting task. Firms will struggle to dispositively show that they need to acquire the target firm in order to effectively compete against an incumbent. The question essentially hinges on the quality of the firm’s existing management, engineers, and capabilities. All of these are difficult—perhaps even impossible—to measure. At the very least, policymakers can improve the odds of reaching a correct decision by approaching these mergers with an open mind.
Under Chair Lina Khan’s tenure, the FTC has opted for the opposite approach and taken a decidedly hostile view of tech acquisitions. The commission sued to block both Meta’s purchase of Within and Microsoft’s acquisition of Activision Blizzard. Likewise, several economists—notably Tommaso Valletti—have called for policymakers to reverse the burden of proof in merger proceedings, and opined that all mergers should be viewed with suspicion because, absent efficiencies, they always reduce competition.
Unfortunately, this skeptical approach is something of a self-fulfilling prophecy: when authorities view mergers with suspicion, they are likely to be dismissive of the benefits discussed above. Mergers will be blocked, and entry into adjacent markets will have to occur via internal growth, if it occurs at all.
Large tech companies’ many failed attempts to enter adjacent markets via internal growth suggest that such an outcome would ultimately harm the digital economy. Too many “boss battles” will needlessly be lost, depriving consumers of precious competition and destroying startup companies’ exit strategies.