Archives For Twitter

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Earlier this month, Professors Fiona Scott Morton, Steve Salop, and David Dinielli penned a letter expressing their “strong support” for the proposed American Innovation and Choice Online Act (AICOA). In the letter, the professors address criticisms of AICOA and urge its approval, despite possible imperfections.

“Perhaps this bill could be made better if we lived in a perfect world,” the professors write, “[b]ut we believe the perfect should not be the enemy of the good, especially when change is so urgently needed.”

The problem is that the professors and other supporters of AICOA have shown neither that “change is so urgently needed” nor that the proposed law is, in fact, “good.”

Is Change ‘Urgently Needed’?

With respect to the purported urgency that warrants passage of a concededly imperfect bill, the letter authors assert two points. First, they claim that AICOA’s targets—Google, Apple, Facebook, Amazon, and Microsoft (collectively, GAFAM)—“serve as the essential gatekeepers of economic, social, and political activity on the internet.” It is thus appropriate, they say, to amend the antitrust laws to do something they have never before done: saddle a handful of identified firms with special regulatory duties.

But is this oft-repeated claim about “gatekeeper” status true? The label conjures up the old Terminal Railroad case, in which a group of firms controlled the only bridges over the Mississippi River at St. Louis, leaving freighters no choice but to use their services. Do the GAFAM firms really play a similar role with respect to “economic, social, and political activity on the internet”? Hardly.

With respect to economic activity, Amazon may be a huge player, but it still accounts for only 39.5% of U.S. ecommerce sales—and far less of retail sales overall. Consumers have gobs of other ecommerce options, and so do third-party merchants, which may sell their wares using Shopify, eBay, Walmart, Etsy, numerous other ecommerce platforms, or their own websites.

For social activity on the internet, consumers need not rely on Facebook and Instagram. They can connect with others via Snapchat, Reddit, Pinterest, TikTok, Twitter, and scores of other sites. To be sure, all these services have different niches, but the letter authors’ claim that the GAFAM firms are “essential gatekeepers” of “social… activity on the internet” is spurious.

Nor are the firms singled out by AICOA essential gatekeepers of “political activity on the internet.” The proposed law touches neither Twitter, the primary hub of political activity on the internet, nor TikTok, which is increasingly used for political messaging.

The second argument the letter authors assert in support of their claim of urgency is that “[t]he decline of antitrust enforcement in the U.S. is well known, pervasive, and has left our jurisprudence unable to protect and maintain competitive markets.” In other words, contemporary antitrust standards are anemic and have led to a lack of market competition in the United States.

The evidence for this claim, which is increasingly parroted in the press and among the punditry, is weak. Proponents primarily point to studies showing:

  1. increasing industrial concentration;
  2. higher markups on goods and services since 1980;
  3. a declining share of surplus going to labor, which could indicate monopsony power in labor markets; and
  4. a reduction in startup activity, suggesting diminished innovation. 

Examined closely, however, those studies fail to establish a domestic market power crisis.

Industrial concentration has little to do with market power in actual markets. Indeed, research suggests that, while industries may be consolidating at the national level, competition at the market (local) level is increasing, as more efficient national firms open more competitive outlets in local markets. As Geoff Manne sums up this research:

Most recently, several working papers looking at the data on concentration in detail and attempting to identify the likely cause for the observed data, show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

With respect to the evidence on markups, the claim of a significant increase in the price-cost margin depends crucially on the measure of cost. The studies suggesting an increase in margins since 1980 use the “cost of goods sold” (COGS) metric, which excludes a firm’s management and marketing costs—both of which have become an increasingly significant portion of firms’ costs. Measuring costs using the “operating expenses” (OPEX) metric, which includes management and marketing costs, reveals that public-company markups increased only modestly since the 1980s and that the increase was within historical variation. (It is also likely that increased markups since 1980 reflect firms’ more extensive use of technology and their greater regulatory burdens, both of which raise fixed costs and require higher markups over marginal cost.)
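
To see how much the cost measure matters, consider a stylized illustration (the numbers below are hypothetical, chosen only to show the mechanism, and are not drawn from the studies discussed above):

```latex
% Hypothetical firm, two periods; selling price P = 100 in both.
% 1980:  COGS = 70, management/marketing (SG&A) = 10
% Today: COGS = 50, management/marketing (SG&A) = 30
%
% Markup measured against COGS alone rises sharply:
\[ \frac{P}{\mathrm{COGS}}:\qquad \frac{100}{70}\approx 1.43 \;\longrightarrow\; \frac{100}{50}=2.00 \]
% Markup measured against all operating expenses (OPEX = COGS + SG&A) is flat:
\[ \frac{P}{\mathrm{OPEX}}:\qquad \frac{100}{80}=1.25 \;\longrightarrow\; \frac{100}{80}=1.25 \]
```

On these invented figures, the COGS-based markup appears to surge merely because costs migrated from COGS into SG&A; the markup over total operating expenses never moves.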

As for the declining labor share, that dynamic is occurring globally. Indeed, the decline in the labor share in the United States has been less severe than in Japan, Canada, Italy, France, Germany, China, Mexico, and Poland, suggesting that anemic U.S. antitrust enforcement is not to blame. (A reduction in the relative productivity of labor is a more likely culprit.)

Finally, the claim of reduced startup activity is unfounded. In its report on competition in digital markets, the U.S. House Judiciary Committee asserted that, since the advent of the major digital platforms:

  1. “[t]he number of new technology firms in the digital economy has declined”;
  2. “the entrepreneurship rate—the share of startups and young firms in the [high technology] industry as a whole—has also fallen significantly”; and
  3. “[u]nsurprisingly, there has also been a sharp reduction in early-stage funding for technology startups.” (pp. 46-47).

Those claims, however, are based on cherry-picked evidence.

In support of the first two, the Judiciary Committee report cited a study based on data ending in 2011. As Benedict Evans has observed, “standard industry data shows that startup investment rounds have actually risen at least 4x since then.”

In support of the third claim, the report cited statistics from an article noting that the number and aggregate size of the very smallest venture capital deals—those under $1 million—fell between 2014 and 2018 (after growing substantially from 2008 to 2014). The Judiciary Committee report failed to note, however, the cited article’s observation that small venture deals ($1 million to $5 million) had not dropped and that larger venture deals (greater than $5 million) had grown substantially during the same time period. Nor did the report acknowledge that venture-capital funding has continued to increase since 2018.

Finally, there is also reason to think that AICOA’s passage would harm, not help, the startup environment:

AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

Despite the letter authors’ claims, neither a paucity of avenues for “economic, social, and political activity on the internet” nor the general state of market competition in the United States establishes an “urgent need” to re-write the antitrust laws to saddle a small group of firms with unprecedented legal obligations.

Is the Vagueness of AICOA’s Primary Legal Standard a Feature?

AICOA bars covered platforms from engaging in three broad classes of conduct (self-preferencing, discrimination among business users, and limiting business users’ ability to compete) where the behavior at issue would “materially harm competition.” It then forbids several specific business practices, but allows defendants to avoid liability by proving that their use of a given practice would not cause a “material harm to competition.”

Critics have argued that “material harm to competition”—a standard that is not used elsewhere in the antitrust laws—is too indeterminate to provide business planners and adjudicators with adequate guidance. The authors of the pro-AICOA letter, however, maintain that this “different language is a feature, not a bug.”

That is so, the letter authors say, because the language effectively signals to courts and policymakers that antitrust should prohibit more conduct. They explain:

To clarify to courts and policymakers that Congress wants something different (and stronger), new terminology is required. The bill’s language would open up a new space and move beyond the standards imposed by the Sherman Act, which has not effectively policed digital platforms.

Putting aside the weakness of the letter authors’ premise (i.e., that Sherman Act standards have proven ineffective), the legislative strategy they advocate—obliquely signal that you want “change” without saying what it should consist of—is irresponsible and risky.

The letter authors assert two reasons Congress should not worry about enacting a liability standard that has no settled meaning. One is that:

[t]he same judges who are called upon to render decisions under the existing, insufficient, antitrust regime, will also be called upon to render decisions under the new law. They will be the same people with the same worldview.

It is thus unlikely that “outcomes under the new law would veer drastically away from past understandings of core concepts….”

But this claim undermines the argument that a new standard is needed to get the courts to do “something different” and “move beyond the standards imposed by the Sherman Act.” If we don’t need to worry about an adverse outcome from a novel, ill-defined standard because courts are just going to continue applying the standard they’re familiar with, then what’s the point of changing the standard?

A second reason not to worry about the lack of clarity on AICOA’s key liability standard, the letter authors say, is that federal enforcers will define it:

The new law would mandate that the [Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice], the two expert agencies in the area of competition, together create guidelines to help courts interpret the law. Any uncertainty about the meaning of words like ‘competition’ will be resolved in those guidelines and over time with the development of caselaw.

This is no doubt music to the ears of members of Congress, who love to get credit for “doing something” legislatively, while leaving the details to an agency so that they can avoid accountability if things turn out poorly. Indeed, the letter authors explicitly play upon legislators’ unwholesome desire for credit-sans-accountability. They emphasize that “[t]he agencies must [create and] update the guidelines periodically. Congress doesn’t have to do much of anything very specific other than approve budgets; it certainly has no obligation to enact any new laws, let alone amend them.”

AICOA does not, however, confer rulemaking authority on the agencies; it merely directs them to create and periodically update “agency enforcement guidelines” and “agency interpretations” of certain affirmative defenses. Those guidelines and interpretations would not bind courts, which would be free to interpret AICOA’s new standard differently. The letter authors presume that courts would defer to the agencies’ interpretation of the vague standard, and they probably would. But that raises other problems.

For one thing, it reduces certainty, which is likely to chill innovation. Giving the enforcement agencies de facto power to determine and redetermine what behaviors “would materially harm competition” means that the rules are never settled. Administrations differ markedly in their views about what the antitrust laws should forbid, so business planners could never be certain that a product feature or revenue model that is legal today will not be deemed to “materially harm competition” by a future administration with greater solicitude for small rivals and upstarts. Such uncertainty will hinder investment in novel products, services, and business models.

Consider, for example, Google’s investment in the Android mobile operating system. Google makes money from Android—which it licenses to device manufacturers for free—by ensuring that Google’s revenue-generating services (e.g., its search engine and browser) are strongly preferenced on Android products. One administration might believe that this is a procompetitive arrangement, as it creates a different revenue model for mobile operating systems (as opposed to Apple’s generation of revenue from hardware sales), resulting in both increased choice and lower prices for consumers. A subsequent administration might conclude that the arrangement materially harms competition by making it harder for rival search engines and web browsers to gain market share. It would make scant sense for a covered platform to make an investment like Google did with Android if its underlying business model could be upended by a new administration with de facto power to rewrite the law.

A second problem with having the enforcement agencies determine and redetermine what covered platforms may do is that it effectively transforms the agencies from law enforcers into sectoral regulators. Indeed, the letter authors agree that “the ability of expert agencies to incorporate additional protections in the guidelines” means that “the bill is not a pure antitrust law but also safeguards other benefits to consumers.” They tout that “the complementarity between consumer protection and competition can be addressed in the guidelines.”

Of course, to the extent that the enforcement guidelines address concerns besides competition, they will be less useful for interpreting AICOA’s “material harm to competition” standard; they might deem a practice suspect on non-competition grounds. Moreover, it is questionable whether creating a sectoral regulator for five widely diverse firms is a good idea. The history of sectoral regulation is littered with examples of agency capture, rent-seeking, and other public-choice concerns. At a minimum, Congress should carefully examine the potential downsides of sectoral regulation, install protections to mitigate those downsides, and explicitly establish the sectoral regulator.

Will AICOA Break Popular Products and Services?

Many popular offerings by the platforms covered by AICOA involve self-preferencing, discrimination among business users, or one of the other behaviors the bill presumptively bans. Pre-installation of iPhone apps and services like Siri, for example, involves self-preferencing or discrimination among business users of Apple’s iOS platform. But iPhone consumers value having a mobile device that offers extensive services right out of the box. Consumers love that Google’s search result for an establishment offers directions to the place, which involves the preferencing of Google Maps. And consumers positively adore Amazon Prime, which can provide free expedited delivery because Amazon conditions Prime designation on a third-party seller’s use of Amazon’s efficient, reliable “Fulfillment by Amazon” service—something Amazon could not do under AICOA.

The authors of the pro-AICOA letter insist that the law will not ban attractive product features like these. AICOA, they say:

provides a powerful defense that forecloses any thoughtful concern of this sort: conduct otherwise banned under the bill is permitted if it would ‘maintain or substantially enhance the core functionality of the covered platform.’

But the authors’ confidence that this affirmative defense will adequately protect popular offerings is misplaced. The defense is narrow and difficult to mount.

First, it immunizes only those behaviors that maintain or substantially enhance the “core” functionality of the covered platform. Courts would rightly interpret AICOA to give effect to that otherwise unnecessary word, which dictionaries define as “the central or most important part of something.” Accordingly, any self-preferencing, discrimination, or other presumptively illicit behavior that enhances a covered platform’s service but not its “central or most important” functions is not even a candidate for the defense.

Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden, and it beggars belief to suppose that business planners considering novel offerings involving self-preferencing, discrimination, or some other presumptively illicit conduct would feel confident that they could make the required showing. It is likely, then, that AICOA would break existing products and services and discourage future innovation.

Of course, Congress could mitigate this concern by specifying that AICOA does not preclude certain things, such as pre-installed apps or consumer-friendly search results. But the legislation would then lose the support of the many interest groups who want the law to preclude various popular offerings that its text would now forbid. Unlike consumers, who are widely dispersed and difficult to organize, the groups and competitors that would benefit from things like stripped-down smartphones, map-free search results, and Prime-less Amazon are effective lobbyists.

Should the US Follow Europe?

Having responded to criticisms of AICOA, the authors of the pro-AICOA letter go on offense. They assert that enactment of the bill is needed to ensure that the United States doesn’t lose ground to Europe, both in regulatory leadership and in innovation. Observing that the European Union’s Digital Markets Act (DMA) has just become law, the authors write that:

[w]ithout [AICOA], the role of protecting competition and innovation in the digital sector outside China will be left primarily to the European Union, abrogating U.S. leadership in this sector.

Moreover, if Europe implements its DMA and the United States does not adopt AICOA, the authors claim:

the center of gravity for innovation and entrepreneurship [could] shift from the U.S. to Europe, where the DMA would offer greater protections to start ups and app developers, and even makers and artisans, against exclusionary conduct by the gatekeeper platforms.

Implicit in the argument that AICOA is needed to maintain America’s regulatory leadership is the assumption that to lead in regulatory policy is to have the most restrictive rules. The most restrictive regulator will necessarily be the “leader” in the sense that it will be the one with the most control over regulated firms. But leading in the sense of optimizing outcomes and thereby serving as a model for other jurisdictions entails crafting the best policies—those that minimize the aggregate social losses from wrongly permitting bad behavior, wrongly condemning good behavior, and determining whether conduct is allowed or forbidden (i.e., those that “minimize the sum of error and decision costs”). Rarely is the most restrictive regulatory regime the one that optimizes outcomes, and as I have elsewhere explained, the rules set forth in the DMA hardly seem calibrated to do so.
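
The parenthetical error-cost idea can be made explicit. Here is a minimal sketch of the conventional decision-theoretic framing (this is the standard formalization of the error-cost literature, not language drawn from AICOA or the DMA):

```latex
% A legal rule's total social cost: error costs of both kinds plus decision costs.
\[ \min_{\text{rule}}\; \Big[\, C_{\mathrm{false\ positives}} \;+\; C_{\mathrm{false\ negatives}} \;+\; C_{\mathrm{decision}} \,\Big] \]
% False positives: good conduct wrongly condemned.
% False negatives: bad conduct wrongly permitted.
% Decision costs: determining whether conduct is allowed or forbidden.
```

On this framing, the most restrictive regime minimizes only the false-negative term, typically at great expense to the other two.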

As for “innovation and entrepreneurship” in the technological arena, it would be a seismic shift indeed if the center of gravity were to migrate to Europe, which is currently home to zero of the top 20 global tech companies. (The United States hosts 12; China, eight.)

It seems implausible, though, that imposing a bunch of restrictions on large tech companies that have significant resources for innovation and are scrambling to enter each other’s markets will enhance, rather than retard, innovation. The self-preferencing bans in AICOA and DMA, for example, would prevent Apple from developing its own search engine to compete with Google, as it has apparently contemplated. Why would Apple develop its own search engine if it couldn’t preference it on iPhones and iPads? And why would Google have started its shopping service to compete with Amazon if it couldn’t preference Google Shopping in search results? And why would any platform continually improve to gain more users as it neared the thresholds for enhanced duties under DMA or AICOA? It seems more likely that the DMA/AICOA approach will hinder, rather than spur, innovation.

At the very least, wouldn’t it be prudent to wait and see whether DMA leads to a flourishing of innovation and entrepreneurship in Europe before jumping on the European bandwagon? After all, technological innovations that occur in Europe won’t be available only to Europeans. Just as Europeans benefit from innovation by U.S. firms, American consumers will be able to reap the benefits of any DMA-inspired innovation occurring in Europe. Moreover, if DMA indeed furthers innovation by making it easier for entrants to gain footing, even American technology firms could benefit from the law by launching their products in Europe. There’s no reason for the tech sector to move to Europe to take advantage of a small-business-protective European law.

In fact, the optimal outcome might be to have one jurisdiction in which major tech platforms are free to innovate, enter each other’s markets via self-preferencing, etc. (the United States, under current law) and another that is more protective of upstart businesses that use the platforms (Europe under DMA). The former jurisdiction would create favorable conditions for platform innovation and inter-platform competition; the latter might enhance innovation among businesses that rely on the platforms. Consumers in each jurisdiction, however, would benefit from innovation facilitated by the other.

It makes little sense, then, for the United States to rush to adopt European-style regulation. DMA is a radical experiment. Regulatory history suggests that the sort of restrictiveness it imposes retards, rather than furthers, innovation. But in the unlikely event that things turn out differently this time, little harm would result from waiting to see DMA’s benefits before implementing its restrictive approach. 

Does AICOA Threaten Platforms’ Ability to Moderate Content and Police Disinformation?

The authors of the pro-AICOA letter conclude by addressing the concern that AICOA “will inadvertently make content moderation difficult because some of the prohibitions could be read… to cover and therefore prohibit some varieties of content moderation” by covered platforms.

The letter authors say that a reading of AICOA to prohibit content moderation is “strained.” They maintain that the act’s requirement of “competitive harm” would prevent imposition of liability based on content moderation and that the act is “plainly not intended to cover” instances of “purported censorship.” They further contend that the risk of judicial misconstrual exists with all proposed laws and therefore should not be a sufficient reason to oppose AICOA.

Each of these points is weak. Section 3(a)(3) of AICOA makes it unlawful for a covered platform to “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition.” It is hardly “strained” to reason that this provision is violated when, say, Google’s YouTube selectively demonetizes a business user for content that Google deems harmful or misleading. Or when Apple removes Parler, but not every other violator of service terms, from its App Store. Such conduct could “materially harm competition” by impeding the de-platformed business’ ability to compete with its rivals.

And it is hard to say that AICOA is “plainly not intended” to forbid these acts when a key supporting senator touted the bill as a means of policing content moderation and observed during markup that it would “make some positive improvement on the problem of censorship” (i.e., content moderation) because “it would provide protections to content providers, to businesses that are discriminated against because of the content of what they produce.”

At a minimum, we should expect some state attorneys general to try to use the law to police content moderation they disfavor, and the mere prospect of such legal action could chill anti-disinformation efforts and other forms of content moderation.

Of course, there’s a simple way for Congress to eliminate the risk of what the letter authors deem judicial misconstrual: It could clarify that AICOA’s prohibitions do not cover good-faith efforts to moderate content or police disinformation. Such clarification, however, would kill the bill, as several Republican legislators are supporting the act because it restricts content moderation.

The risk of judicial misconstrual with AICOA, then, is not the sort that exists with “any law, new or old,” as the letter authors contend. “Normal” misconstrual risk exists when legislators try to be clear about their intentions but, because language has its limits, some vagueness or ambiguity persists. AICOA’s architects have deliberately obscured their intentions in order to cobble together enough supporters to get the bill across the finish line.

The one thing that all AICOA supporters can agree on is that they deserve credit for “doing something” about Big Tech. If the law is construed in a way they disfavor, they can always act shocked and blame rogue courts. That’s shoddy, cynical lawmaking.

Conclusion

So, I respectfully disagree with Professors Scott Morton, Salop, and Dinielli on AICOA. There is no urgent need to pass the bill right now, especially as we are on the cusp of seeing an AICOA-like regime put to the test. The bill’s central liability standard is overly vague, and its plain terms would break popular products and services and thwart future innovation. The United States should equate regulatory leadership with the best, not the most restrictive, policies. And Congress should thoroughly debate and clarify its intentions on content moderation before enacting legislation that could upend the status quo on that important matter.

For all these reasons, Congress should reject AICOA. And for the same reasons, a future in which AICOA is adopted is extremely unlikely to resemble the Utopian world that Professors Scott Morton, Salop, and Dinielli imagine.

Welcome to the Truth on the Market FTC UMC Roundup for May 27, 2022. This week we have (Hail Mary?) revisions to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act, initiatives that can’t decide whether they belong in Congress or the Federal Trade Commission, and yet more commentary on inflation and antitrust, along with a twist ending.

This Week’s Headline

Sen. Klobuchar has shared a revised version of her proposed American Innovation and Choice Online Act. What’s different? Not much. The main change is that several industries—banks and telecom, notably—are excluded from coverage. That was probably an effort to win some Republican votes for the bill. But headed into the midterms, it appears some congressional Democrats view this more as a poison pill than a good bill—one they don’t think their constituents are willing to swallow.

Back at the FTC, the commission has announced that it will investigate the recent shortage of infant formula. This could focus on both consumer-protection and competition issues. The market for infant formula in the United States is both fairly concentrated and highly regulated. There are lots of interesting issues here (a reminder to any academics reading this: we have an open call for papers for research relating to market-structuring regulation).

The blurry line between FTC and Congress remains blurry. The FTC’s call for comments relating to pharmacy benefit managers (PBMs) closed this week, with more than 500 comments, at the same time that bipartisan legislation relating to PBMs has been introduced. And Sens. Mike Rounds (R-S.D.) and Elizabeth Warren (D-Mass.) want the FTC to investigate price fixing in the beef industry.

Concentrating a bit on big-picture policy issues, the number of friends Larry Summers has in the White House is shrinking faster than the dollar, as he worries about the embrace of “hipster antitrust,” including his concern that the administration’s antitrust policy is driving inflation. On the other side of the inflation-antitrust ledger, economists at the Federal Reserve Bank of Boston released a paper arguing that high concentration increases inflation. Among others, ICLE Chief Economist Brian Albrecht calls foul. Still on the inflation beat, it’s no secret that the biggest tech companies hold a lot of cash. Some may wonder, with the cost of holding cash so high, is a buying spree on the horizon? (Answer: not if the FTC keeps holding up mergers!)

A Few Quick Hits

Former FTC Commissioner Josh Wright and former commission staffer Derek Moore reflect on FTC morale. And Howard Beales and former FTC Chair Tim Muris wonder whether the “national nanny” is back on the beat.

It’s consumer-protection news rather than antitrust news, but Twitter has been hit with a $150 million fine for doing bad stuff with user data between 2013 and 2019. Perhaps DuckDuckGo will be up next for the FTC. It turns out that the browser built on promises that it doesn’t track you has a deal with Microsoft to let Microsoft track you. That gives us an excuse to mention the FTC’s call for presentations for PrivacyCon 2022.

In international news, the United Kingdom’s Competition and Markets Authority has opened a second investigation into Google’s AdTech practices. And Shane Tewes of the American Enterprise Institute has a nice discussion with Peter Brown from the European Parliament’s liaison office about American versus European approaches to technology policy.

We close with a twist ending: One of the concerns that critics of the FTC’s newfound embrace of its UMC authority have is that expansive, vague authority given to regulators enables a flabby, useless government that is paradoxically too powerful. Which is why it’s interesting to see Matt Stoller of the American Economic Liberties Project, of all people, express that concern. Strange bedfellows indeed!

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

The tentatively pending sale of Twitter to Elon Musk has been greeted with celebration by many on the right, along with lamentation by some on the left, regarding what it portends for the platform’s moderation policies. Musk, for his part, has announced that he believes Twitter should be a free-speech haven and that it needs to dial back the (allegedly politically biased) moderation in which it has engaged.

The good news for everyone is that a differentiated product at Twitter could be exactly what the market―and the debate over Big Tech―needs.

The Market for Speech Governance

As I’ve written previously, the First Amendment (bolstered by Section 230 of the Communications Decency Act) protects not only speech itself, but also the private ordering of speech. “Congress shall make no law… abridging the freedom of speech” means that state actors can’t infringe speech, but it also (in most cases) protects private actors’ ability to make such rules free from government regulation. As the Supreme Court has repeatedly held, private actors can make their own rules about speech on their own property.

As Justice Brett Kavanaugh put it on behalf of the Court in Manhattan Community Access Corp. v. Halleck:

[W]hen a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum…

In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.

In other words, as much as it protects “the marketplace of ideas,” the First Amendment also protects “the market for speech governance.” Musk’s idea that Twitter should be subject to the First Amendment is simply incoherent, but his vision for Twitter to have less politically biased content moderation could work.

Musk’s Plan for Twitter

There has been much commentary on what Musk intends to do, and whether it is a realistic way to maximize the platform’s value. As a multi-sided platform, Twitter’s revenue is driven by advertisers, who want to reach a mass audience. This means Twitter, much like other social-media platforms, must consider the costs and benefits of speech to its users, and strike a balance that maximizes the value of the platform. The history of social-media content moderation suggests that these platforms have found that rules against harassment, abuse, spam, bots, pornography, and certain hate speech and misinformation are necessary.

For rules pertaining to harassment and abuse, in particular, it is easy to understand how they are necessary to prevent losing users. There seems to be a wide societal consensus that such speech is intolerable. Similarly, spam, bots, and pornographic content, even if legal speech, are largely not what social media users want to see.

But for hate speech and misinformation, however much one agrees in the abstract about their undesirability, there is significant debate on the margins about what is acceptable or unacceptable discourse, just as there is over what is true or false when it comes to hot-button social and political issues. It is one thing to ban Nazis due to hate speech; it is arguably quite another to remove a prominent feminist author due to “misgendering” people. It is also one thing to say crazy conspiracy theories like QAnon should be moderated, but quite another to fact-check good-faith questioning of the efficacy of masks or vaccines. It is likely in these areas that Musk will offer an alternative to what is largely seen as biased content moderation from Big Tech companies.

Musk appears to be making a bet that the market for speech governance is currently not well-served by the major competitors in the social-media space. If Twitter could thread the needle by offering a more politically neutral moderation policy that still manages to keep off the site enough of the types of content that repel users, then it could conceivably succeed and even influence the moderation policies of other social-media companies.

Let the Market Decide

The crux of the issue is this: Conservatives who have backed antitrust and regulatory action against Big Tech because of political bias concerns should be willing to back off and allow the market to work. And liberals who have defended the right of private companies to make rules for their platforms should continue to defend that principle. Let the market decide.

A bipartisan group of senators unveiled legislation today that would dramatically curtail the ability of online platforms to “self-preference” their own services—for example, when Apple pre-installs its own Weather or Podcasts apps on the iPhone, giving it an advantage that independent apps don’t have. The measure accompanies a House bill that included similar provisions, with some changes.

1. The Senate bill closely resembles the House version, and the small improvements will probably not amount to much in practice.

The major substantive changes we have seen between the House bill and the Senate version are:

  1. Violations in Section 2(a) have been modified to refer only to conduct that “unfairly” preferences, limits, or discriminates between the platform’s products and others, and that “materially harm[s] competition on the covered platform,” rather than banning all preferencing, limits, or discrimination.
  2. The evidentiary burden required throughout the bill has been changed from “clear and convincing” to a “preponderance of the evidence” (in other words, greater than 50%).
  3. An affirmative defense has been added to permit a platform to escape liability if it can establish that the challenged conduct “was narrowly tailored, was nonpretextual, and was necessary to… maintain or enhance the core functionality of the covered platform.”
  4. The minimum market capitalization for “covered platforms” has been lowered from $600 billion to $550 billion.
  5. The Senate bill would assess fines of 15% of revenues from the period during which the conduct occurred, in contrast with the House bill, which set fines equal to the greater of either 15% of prior-year revenues or 30% of revenues from the period during which the conduct occurred.
  6. Unlike the House bill, the Senate bill does not create a private right of action. Only the U.S. Justice Department (DOJ), Federal Trade Commission (FTC), and state attorneys general could bring enforcement actions on the basis of the bill.

Item one here certainly mitigates the most extreme risks of the House bill, which was drafted, bizarrely, to ban all “preferencing” or “discrimination” by platforms. Had that become law, it could literally have broken much of the Internet. The softened language reduces that risk somewhat.

However, Section 2(b), which lists types of conduct that would presumptively establish a violation under Section 2(a), is largely unchanged. As outlined here, this would amount to a broad ban on a wide swath of beneficial conduct. And “unfair” and “material” are notoriously slippery concepts. As a practical matter, their inclusion here may not significantly alter the course of enforcement under the Senate legislation from what would ensue under the House version.

Item three, which allows challenged conduct to be defended if it is “necessary to… maintain or enhance the core functionality of the covered platform,” may also protect some conduct. But because the bill requires companies to prove that challenged conduct is not only beneficial, but necessary to realize those benefits, it effectively implements a “guilty until proven innocent” standard that is likely to prove impossible to meet. The threat of permanent injunctions and enormous fines will mean that, in many cases, companies simply won’t be able to justify the expense of endeavoring to improve even the “core functionality” of their platforms in any way that could trigger the bill’s liability provisions. Thus, again, as a practical matter, the difference between the Senate and House bills may be only superficial.

The effect of this will likely be to diminish product innovation in these areas, because companies could not know in advance whether the benefits of doing so would be worth the legal risk. We have previously highlighted existing conduct that may be lost if a bill like this passes, such as pre-installation of apps or embedding maps and other “rich” results in boxes on search engine results pages. But the biggest loss may be things we don’t even know about yet, that just never happen because the reward from experimentation is not worth the risk of being found to be “discriminating” against a competitor.

We dove into the House bill in Breaking Down the American Choice and Innovation Online Act and Breaking Down House Democrats’ Forthcoming Competition Bills.

2. The prohibition on “unfair self-preferencing” is vague and expansive and will make Google, Amazon, Facebook, and Apple’s products worse. Consumers don’t want digital platforms to be dumb pipes, or to act like a telephone network or sewer system. The Internet is filled with a superabundance of information and options, as well as a host of malicious actors. Good digital platforms act as middlemen, sorting information in useful ways and taking on some of the risk that exists when, inevitably, we end up doing business with untrustworthy actors.

When users have the choice, they tend to prefer platforms that do quite a bit of “discrimination”—that is, favoring some sellers over others, or offering their own related products or services through the platform. Most people prefer Amazon to eBay because eBay is chaotic and riskier to use.

Competitors that decry self-preferencing by the largest platforms—integrating two different products with each other, like putting a maps box showing only the search engine’s own maps on a search engine results page—argue that the conduct is enabled only by a platform’s market dominance and does not benefit consumers.

Yet these companies often do exactly the same thing in their own products, regardless of whether they have market power. Yelp includes a map on its search results page, not just restaurant listings. DuckDuckGo does the same. If these companies offer these features, it is presumably because they think their users want such results. It seems perfectly plausible that Google does the same because it thinks its users—literally the same users, in most cases—also want them.

Fundamentally, and as we discuss in Against the Vertical Discrimination Presumption, there is simply no sound basis to enact such a bill (even in a slightly improved version):

The notion that self-preferencing by platforms is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true. In reality, platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

We discussed self-preferencing further in Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, and showed that platform “discrimination” is often what consumers want from digital platforms in On the Origin of Platforms: An Evolutionary Perspective.

3. The bill massively empowers an FTC that seems intent on using antitrust to achieve political goals. The House bill would enable competitors to pepper covered platforms with frivolous lawsuits. The bill’s sponsors presumably hope that removing the private right of action will help to avoid that. But the bill still leaves intact a much more serious risk to the rule of law: the bill’s provisions are so broad that federal antitrust regulators will have enormous discretion over which cases they take.

This means that whoever is running the FTC and DOJ will be able to threaten covered platforms with a broad array of lawsuits, potentially to influence or control their conduct in other, unrelated areas. While some supporters of the bill regard this as a positive, most antitrust watchers would greet this power with much greater skepticism. Fundamentally, both bills grant antitrust enforcers wildly broad powers to pursue goals unrelated to competition. FTC Chair Lina Khan has, for example, argued that “the dispersion of political and economic control” ought to be antitrust’s goal. Commissioner Rebecca Kelly Slaughter has argued that antitrust should be “antiracist.”

Whatever the desirability of these goals, the broad discretionary authority the bills confer on the antitrust agencies means that individual commissioners may have significantly greater scope to pursue the goals they believe to be right, rather than those Congress has chosen.

See discussions of this point at What Lina Khan’s Appointment Means for the House Antitrust Bills, Republicans Should Tread Carefully as They Consider ‘Solutions’ to Big Tech, The Illiberal Vision of Neo-Brandeisian Antitrust, and Alden Abbott’s discussion of FTC Antitrust Enforcement and the Rule of Law.

4. The bill adopts European principles of competition regulation. These are, to put it mildly, not obviously conducive to the sort of innovation and business growth that Americans may expect. Europe has no tech giants of its own, a condition that shows little sign of changing. Apple, alone, is worth as much as the top 30 companies in Germany’s DAX index, and the top 40 in France’s CAC index. Landmark European competition cases have seen Google fined for embedding Shopping results in the Search page—not because it hurt consumers, but because it hurt competing price-comparison websites.

A fundamental difference between American and European competition regimes is that the U.S. system is far more friendly to businesses that obtain dominant market positions because they have offered better products more cheaply. Under the American system, successful businesses are normally given broad scope to charge high prices and refuse to deal with competitors. This helps to increase the rewards and incentive to innovate and invest in order to obtain that strong market position. The European model is far more burdensome.

The Senate bill adopts a European approach to refusals to deal—the same approach that led the European Commission to fine Microsoft for including Windows Media Player with Windows—and applies it across Big Tech broadly. Adopting this kind of approach may end up undermining elements of U.S. law that support innovation and growth.

For more, see How US and EU Competition Law Differ.

5. The proposals are based on a misunderstanding of the state of competition in the American economy, and of antitrust enforcement. It is widely believed that the U.S. economy has seen diminished competition. This is mistaken, particularly with respect to digital markets. Apparent rises in market concentration and profit margins disappear when we look more closely: local-level concentration is falling even as national-level concentration is rising, driven by more efficient chains setting up more stores in areas that were previously served by only one or two firms.

And markup rises largely disappear after accounting for fixed costs like R&D and marketing.

Where profits are rising, in areas like manufacturing, it appears to be mainly driven by increased productivity, not higher prices. Real prices have not risen in line with markups. Where profitability has increased, it has been mainly driven by falling costs.

Nor has the number of antitrust cases brought by federal antitrust agencies fallen. The likelihood of a merger being challenged more than doubled between 1979 and 2017. And there is little reason to believe that the deterrent effect of antitrust has weakened. Many critics of Big Tech have decided that there must be a problem and have worked backwards from that conclusion, selecting whatever evidence supports it and ignoring the evidence that does not. The consequence of such motivated reasoning is bills like this.

See Geoff’s April 2020 written testimony to the House Judiciary Investigation Into Competition in Digital Markets here.

[The following post was adapted from the International Center for Law & Economics White Paper “Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?”]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.
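
The logic here is the textbook externality condition; a minimal sketch follows, with hypothetical magnitudes added purely for illustration:

```latex
% B = Jack's private benefit from tweeting; C = aggregate offense cost
% borne by the audience. Jack tweets whenever B > 0, but the tweet is
% socially worthwhile only if B > C.
\[ B > 0 \;\Rightarrow\; \text{Jack tweets}, \qquad \text{efficiency requires}\; B - C > 0 \]
% E.g. (hypothetical numbers): B = 5, C = 20. The tweet is sent, yet
% social surplus is 5 - 20 = -15, so society is made worse off.
```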

It shouldn’t be allowed!

That’s what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill’s “Harm Principle” by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne had no neighbors, his playing death metal at 130 decibels at 2 a.m. would have no external costs. His neighbors’ choice of address is thus equally a source of the problem. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That’s not exactly right: if negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won’t happen. Transaction costs lower than the difference between the two parties’ valuations are the circumstance in which the allocation of legal rights makes no difference to how resources get used, and in which efficiency will be achieved in any event.
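
Restating the essay’s numbers in the standard Coasean condition makes the role of transaction costs explicit (a textbook formalization, adding nothing beyond the figures already given):

```latex
% V_W = what Wayne would pay to keep playing              = $100,000
% V_N = what the neighbors would accept to put up with it = $95,000
% T   = the cost of negotiating the deal
\[ \text{a mutually beneficial bargain exists iff}\quad V_W - V_N = 100{,}000 - 95{,}000 = 5{,}000 > T \]
% If T < $5,000, the parties trade to the efficient outcome whoever holds
% the initial right; if T >= $5,000, no deal occurs and the court's
% allocation determines the result.
```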

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house, play his music at a much lower volume, or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they needn’t necessarily be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn’t harming them, just as the sexual masochist really is in physical pain but isn’t harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repelled more customers than they attracted (or, more accurately, if they repelled more advertising revenue than they attracted), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximizer. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner’s unhappiness than the lost advertising would have been worth. And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them.

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they almost always overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as The Club of Rome’s 1972 report The Limits to Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary are similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—”Big Tech” and “Big Data”—alleging, in the process, a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.
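For a feel of what diminishing marginal returns to data means in practice, here is a stylized sketch. The square-root relationship is a common textbook simplification, not a finding from the survey:

```python
# Stylized illustration only: suppose prediction quality grows with the square
# root of the number of observations. Then each extra batch of data is worth
# less than the one before it.
import math

def prediction_quality(n: int) -> float:
    """Toy quality metric that grows as sqrt(n)."""
    return math.sqrt(n)

for n in (1_000, 10_000, 100_000, 1_000_000):
    gain = prediction_quality(n + 1_000) - prediction_quality(n)
    print(f"n={n:>9,}  quality={prediction_quality(n):8.1f}  "
          f"gain from 1,000 more observations={gain:6.3f}")
```

Running this shows the marginal gain shrinking from roughly 13 units at n=1,000 to about 0.5 at n=1,000,000: the millionth observation teaches the firm far less than the thousandth did.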

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or to have been competitively constrained all along, contrary to critics’ apprehensions) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, that figure had fallen to about 31%, with Android, iOS, and OS X at roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this Article. The Man in the High Castle tells the story of an alternate present in which the Axis powers triumphed over the Allies in the Second World War. This turns the dystopia genre on its head: rather than argue that the world is inevitably sliding toward a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than by a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held equally responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases have simultaneously been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints with Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts; have a process to periodically review and update those wordings; and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar loss up to a specific dollar threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
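As a rough sketch, the retention-plus-coinsurance split described above can be computed as follows; the terms mirror the example in the text, though real policy terms vary:

```python
# A minimal sketch of the retention-plus-coinsurance split described above,
# using the text's example terms ($25,000 retention, 10% coinsurance).

def split_loss(loss: float, retention: float = 25_000, coinsurance: float = 0.10):
    """Return (insured_share, insurer_share) for a covered loss."""
    if loss <= retention:
        return loss, 0.0
    excess = loss - retention
    return retention + coinsurance * excess, (1 - coinsurance) * excess

insured, insurer = split_loss(125_000)  # hypothetical $125,000 claim
print(f"Insured pays ${insured:,.0f}; insurer pays ${insurer:,.0f}")
# -> Insured pays $35,000 ($25,000 retention plus 10% of the $100,000 excess);
#    insurer pays $90,000.
```

Because the insured keeps paying 10 cents of every dollar above the retention, it retains a direct financial stake in controlling defense costs.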

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
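A simplified sketch of how a loss might cascade through such a tower follows; the two-layer structure, attachment points, and quota shares are invented for illustration:

```python
# A simplified sketch of a two-layer insurance tower with a quota-share
# upper layer. All figures and insurer names are hypothetical.

layers = [
    # (attachment point, layer limit, {insurer: quota share})
    (100_000, 900_000, {"Insurer A": 1.0}),                      # primary layer
    (1_000_000, 4_000_000, {"Insurer B": 0.6, "Insurer C": 0.4}),  # quota-share layer
]

def allocate(loss: float) -> dict:
    """Allocate a loss across layers; amounts below the first attachment
    point (the retention) and above the tower's top stay with the insured."""
    payments: dict = {}
    for attachment, limit, shares in layers:
        in_layer = max(0.0, min(loss - attachment, limit))
        for insurer, share in shares.items():
            payments[insurer] = payments.get(insurer, 0.0) + share * in_layer
    return payments

print(allocate(3_000_000))
# -> {'Insurer A': 900000.0, 'Insurer B': 1200000.0, 'Insurer C': 800000.0}
# The first $100,000 is retained by the policyholder; the $2M reaching the
# quota-share layer splits 60/40 between Insurers B and C.
```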

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds, who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriters are not all created equal; underwriting is a challenging responsibility that requires predicting future losses, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and of authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint-response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause as many problems as it solves, or more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, e.g., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would manifest, and how much pain they would cause all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application while minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Including an express statement in the definition of “interactive computer service,” or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that it will be more than 80 companies, but it is likely to be far more. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
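As a rough sketch of how that burden-shifting trigger operates, based on the reading above (the company figures below are placeholders, not filings data):

```python
# A sketch of the burden-shifting trigger described above, per the bill's
# acquirer-size prong as summarized in the text. Thresholds are in dollars;
# the example inputs are hypothetical.

def burden_shifts(market_cap: float, assets: float, net_revenue: float,
                  deal_value: float) -> bool:
    """True if the burden of proof would shift to the merging parties."""
    big_acquirer = max(market_cap, assets, net_revenue) > 100e9
    return big_acquirer and deal_value >= 50e6

# Hypothetical: a $120B-market-cap acquirer making a $60 million acquisition.
print(burden_shifts(market_cap=120e9, assets=30e9,
                    net_revenue=15e9, deal_value=60e6))  # -> True
```

Note that the test is disjunctive across the three size measures: clearing any one of them, combined with a $50 million deal, flips the burden.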

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If passed, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. And they are instrumental because government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding is a cause for questioning. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, like when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing, at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones under discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism in the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran a lead story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition through differentiated business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition among the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports never really consider the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not yet know how to assess this issue rigorously, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

Now, as should be clear from these few lines, my cautionary note against antitrust statutorification may be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent district court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit from studying more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis on Twitter that fits within 280 characters.



Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Given her longstanding criticism of the Consumer Welfare Standard and her alignment with the neo-Brandeisian school of thought, her appointment is a significant victory for proponents of those views.

Her appointment also comes as House Democrats prepare to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. That expansion may combine with Khan’s appointment in ways lawmakers have not yet anticipated.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services at all, since those integrations could be challenged as “discrimination” that comes at the cost of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.

In fact, the bill’s scope is so broad that some have argued that the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that, “Due to limited resources, not all platform integration will be challenged.” 

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and whose effect may yet be undone by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. In China, anti-corruption laws are frequently used to punish opponents of the regime; those opponents may well be corrupt, but they are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security Law has likewise been used to target peaceful protesters and critical media, thanks to its vague and overly broad drafting.

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws, applied at the enforcer’s discretion, give the enforcer sweeping power to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her leadership, may wish to use this discretionary power to prosecute firms she feels are hurting society for reasons unrelated to the law’s terms, such as the political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but pursuing them would require contentious value judgments that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but she has offered some hints. In her essay “The Ideological Roots of America’s Market Power Problem,” Khan approvingly highlights former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.

Economist Josh Hendrickson asserts that the Jones Act is properly understood as a Coasean bargain. In this view, the law serves as a subsidy to the U.S. maritime industry through its restriction of waterborne domestic commerce to vessels that are constructed in U.S. shipyards, U.S.-flagged, and U.S.-crewed. Such protectionism, it is argued, provides the government with ready access to these assets, rather than taking precious time to build them up during times of conflict.

We are skeptical of this characterization.

Although there is an implicit bargain behind the Jones Act, its relationship to the work of Ronald Coase is unclear. Coase is best known for his theorem on the use of bargains and exchanges to reduce negative externalities. But the negative externality that the Jones Act attempts to address is not apparent. While the law may be a more efficient or effective alternative to the government building up its own shipbuilding capacity, vessels, and crews in times of war, that is rather different from addressing an externality. The Jones Act may reflect an implied exchange between the domestic maritime industry and the government, but there does not appear to be anything particularly Coasean about it.

Rather, close scrutiny reveals this arrangement between government and industry to be a textbook example of policy failure and rent-seeking run amok. The Jones Act is not a bargain, but a rip-off, with costs and benefits completely out of balance.

The Jones Act and National Defense

For all of the talk of the Jones Act’s critical role in national security, its contributions underwhelm. Ships offer a case in point. In times of conflict, the U.S. military’s primary sources of transport are not Jones Act vessels but government-owned ships in the Military Sealift Command and Ready Reserve Force fleets. These are further supplemented by the 60 non-Jones Act U.S.-flag commercial ships enrolled in the Maritime Security Program, a subsidy arrangement by which ships are provided $5 million per year in exchange for the government’s right to use them in time of need.

In contrast, Jones Act ships are used only sparingly. That’s understandable, as removing these vessels from domestic trade would leave a void in the country’s transportation needs not easily filled.

The law’s contributions to domestic shipbuilding are similarly meager, if not outright counterproductive. A mere two to three large, oceangoing commercial ships are delivered by U.S. shipyards per year. That’s not per shipyard, but all U.S. shipyards combined.

Given the vastly uncompetitive state of domestic shipbuilding—a predictable consequence of handing the industry a captive domestic market via the Jones Act’s U.S.-built requirement—there is little appetite for what these shipyards produce. As Hendrickson himself points out, the domestic-build provision serves to “discourage shipbuilders from innovating and otherwise pursuing cost-saving production methods since American shipbuilders do not face international competition.” We could not agree more.

What keeps U.S. shipyards active and available to meet the military’s needs is not work for the Jones Act commercial fleet but rather government orders. A 2015 Maritime Administration report found that such business accounts for 70 percent of revenue for the shipbuilding and repair industry. A 2019 American Enterprise Institute study concluded that, among U.S. shipbuilders that construct both commercial and military ships, Jones Act vessels accounted for less than 5 percent of all shipbuilding orders.

If the Jones Act makes any contribution of note, it is in mariners: the Jones Act fleet is estimated to account for 29 percent of the mariners needed to crew surge sealift ships during times of war. But here, too, the Jones Act is a double-edged sword. By increasing the cost of ships to four to five times the world price, the law’s U.S.-built requirement results in a smaller fleet that employs fewer mariners than would otherwise be the case. That’s particularly noteworthy given government calculations that there is a deficit of roughly 1,800 mariners to crew its fleet in the event of a sustained sealift operation.

Beyond its ruinous impact on the competitiveness of domestic shipbuilding, the Jones Act has had other deleterious consequences for national security. The increased cost of waterborne transport, or its outright impossibility in the case of liquefied natural gas and propane, results in reduced self-reliance for critical energy supplies. This is a sufficiently significant issue that members of the National Security Council unsuccessfully sought a long-term Jones Act waiver in 2019. The law also means fewer redundancies and less flexibility in the country’s transportation system when responding to crises, both natural and manmade. Waivers of the Jones Act can be issued, but this highly politicized process eats up precious days when time is of the essence. All of these factors merit consideration in the overall national security calculus.

To review: the Jones Act’s opaque and implicit subsidy—doled out via protectionism—results in anemic and uncompetitive shipbuilding, few ships available in time of war, and fewer mariners than there would be without its U.S.-built requirement. Its other consequences for national security are not merely underwhelming but plainly negative. Little wonder that Hendrickson concedes it is unclear whether U.S. maritime policy—in which the Jones Act plays a foundational role—achieves its national security goals.

The toll exacted in exchange for the Jones Act’s limited benefits, meanwhile, is considerable. According to a 2019 OECD study, the law’s repeal would increase domestic value added by $19 billion to $64 billion. Incredibly, even that figure may understate matters, as it excludes related costs such as environmental degradation, increased congestion and highway maintenance, and retaliation from U.S. trade partners during free-trade negotiations over U.S. unwillingness to liberalize the Jones Act.

Against such critiques, Hendrickson posits that substantial cost savings are illusory due to immigration and other U.S. laws. But how big a barrier such laws would pose is unclear. It’s worth considering, for example, that cruise ships with foreign crews are able to visit multiple U.S. ports so long as a foreign port is also included on the voyage. The granting of Jones Act waivers, meanwhile, has enabled foreign ships to transport cargo between U.S. ports in the past despite U.S. immigration laws.

Would Chinese-flagged and crewed barges be able to engage in purely domestic trade on the Mississippi River absent the Jones Act? Almost certainly not. But it seems perfectly plausible that foreign ships already sailing between U.S. ports as part of international voyages—a frequent occurrence—could engage in cabotage movements without hiring U.S. crews. Take, for example, APL’s Eagle Express X route that stops in Los Angeles, Honolulu, and Dutch Harbor as well as Asian ports. Without the Jones Act, it’s reasonable to believe that ships operating on this route could transport goods from Los Angeles to Honolulu before continuing on to foreign destinations.

But if the Jones Act fails to deliver national security benefits while imposing substantial costs, how to explain its continued survival? Hendrickson avers that the law’s longevity reflects its utility. We believe, however, that the answer lies in public choice theory. Simply put, the law’s costs are opaque and dispersed across the vast expanse of the U.S. economy, while its benefits are highly concentrated. The law’s de facto subsidy is also greatly oversupplied, given that most of the vessels under its protection are smaller craft, such as tugboats and barges, with trivial value to the country’s sealift capability. This has spawned a lobby aggressively dedicated to the Jones Act’s preservation. Washington, D.C. is home to numerous industry groups and labor organizations that regard the law’s maintenance as critical, but not a single one that views its repeal as a top priority.

It’s instructive in this regard that all four senators from Alaska and Hawaii are strong Jones Act supporters, despite their states being disproportionately burdened by the law. The seeming oddity is explained by the fact that these states are also disproportionately home to maritime interest groups that support the law. In contrast, Jones Act critics Sen. Mike Lee and the late Sen. John McCain both hailed from landlocked states home to few maritime interest groups.

Disagreements, but also Common Ground

For all of our differences with Hendrickson, however, there is substantial common ground. We agree that the Jones Act is suboptimal policy, that its ability to achieve its goals is unclear, and that its U.S.-built requirement is particularly ripe for removal. Our differences lie mostly in the scale of the gains to be realized from the law’s reform or repeal. Either way, there is no reason to maintain the failed status quo. The Jones Act should be repealed and replaced with targeted, transparent, and explicit subsidies to meet the country’s sealift needs. Both the country’s economy and its national security would be rewarded—richly so, in our opinion—by such a policy change.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.
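
As a rough way to see this tiered architecture in one place, here is a minimal sketch in Python. The enum names and obligation summaries below are my paraphrases of the bullets above, not terms defined in the regulation itself:

```python
from enum import Enum

class RiskTier(Enum):
    """Paraphrase of the proposal's four risk tiers (illustrative only)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict restrictions before marketing or use"
    LIMITED = "minor restrictions, such as labeling duties"
    MINIMAL = "voluntary codes of conduct"

# The regulation keys an AI system's obligations to its assigned tier.
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```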

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans law enforcement’s use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating creditworthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.

The rule that data used to train, validate, or test a high-risk AI must be of high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist or that errors can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. The rule also cuts against the frontiers of artificial-intelligence research, where sometimes only small, lower-quality datasets are available for training. A predictable effect is that the rule would benefit large companies, which are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire such data.

Providers of high-risk AIs also must submit technical and user documentation detailing voluminous information about the AI system, including descriptions of the AI’s elements and of its development, monitoring, functioning, and control. This documentation must demonstrate compliance with all the requirements for high-risk AIs, in addition to documenting the system’s characteristics, capabilities, and limitations. Producing this volume of information represents another potentially significant compliance cost, one that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). It could further discourage AI adoption within the EU, where enterprises already cite liability for potential damages and regulatory obstacles as impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that it can never override a human user. While it may be important that an AI used in, say, the criminal justice system be understandable by humans, this requirement could inhibit sophisticated uses beyond the reasoning capacity of a human brain, such as safely operating a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Deepfakes must be labeled to make users aware they are looking at or listening to manipulated images, video, or audio. Chatbots, similarly, must disclose that the user is speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure that high-risk systems conform with rules requiring third-party assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
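
To illustrate the “whichever is greater” structure of these caps, here is a minimal sketch; the function name and tier labels are hypothetical, and the figures simply restate the maximums described above:

```python
def max_penalty_eur(worldwide_turnover_eur: float, violation: str) -> float:
    """Maximum fine under the proposal: a fixed amount or a share of
    worldwide turnover, whichever is greater (illustrative sketch)."""
    tiers = {
        # Marketing or using a prohibited AI, or data-quality breaches.
        "prohibited_or_data_quality": (30_000_000, 0.06),
        # Breaching other requirements for high-risk AIs.
        "other_high_risk": (20_000_000, 0.04),
        # Supplying incorrect, incomplete, or misleading information.
        "misleading_information": (10_000_000, 0.02),
    }
    fixed_amount, turnover_share = tiers[violation]
    return max(fixed_amount, turnover_share * worldwide_turnover_eur)

# For a firm with 2 billion euros in worldwide turnover, the percentage
# cap binds: max(30M, 0.06 * 2B) = 120 million euros.
print(max_penalty_eur(2_000_000_000, "prohibited_or_data_quality"))
```

For any firm with more than 500 million euros in worldwide turnover, the turnover-based cap in the top tier will be the binding one, which is why these percentages matter most to the largest platforms.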

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission gets the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.