
[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]

The pretense of ignorance

Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would, on net, be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, empirically establishes that breaking up tech companies would help protect democracies, if that is the objective. (Note that I am not even discussing the fact that using antitrust law to do so would violate the rule of law; for more on the subject, click here.)

This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the micro- and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological, and legal issues, all of which combined are beyond human understanding.

Ignorance + fear = fame

In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be beneficial. But I am sure we agree that belief alone should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”

In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.

The question, then, is why, in the absence of positive empirical evidence, some officials are arguing for breaking up tech giants. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome due to the scale and complexity of the changes that ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.

But for such a call to be profitable for public officials, they first must legitimize the potential action in the eyes of the majority of the public. So far, most consumers evidently like the services of tech giants, which is why it is crucial for the officials engaged in such a strategy to demonize those companies and explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.

Some data, one trend

In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.

In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to with the pronoun “us”) and tech companies (referred to with the pronoun “them”) in more than 80% of her speeches. She further describes these companies as engaging in manipulation of the public and unleashing violence. She says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” She even says they create a “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).

In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between people and companies as being in opposition in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better), and Commissioner Rohit Chopra, who describes the relationship between companies and the people as being in opposition in 30% of his speeches. Chopra also frequently labels companies as “bad.” These are minor signs of big tech demonization compared to what is currently done by European officials. But, unfortunately, part of the US antitrust literature (which does not hide its political objectives) pushes for demonizing big tech companies. One may reasonably fear that such a trend will grow in the US as it has in Europe, especially considering the upcoming presidential campaign, in which far-right and far-left politicians seem to agree about the need to break up big tech.

And yet, let’s remember that no one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as the new elite standing in opposition to the people, and then by portraying themselves as the only saviors capable of taking action.

Epilogue: who knows, life is not a Tarantino movie

For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to a previous era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups display political goals and make antitrust law a Trojan horse for their personal interests. The only semblance of evidence they bring is a few allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as a basis for defending the urgent need for macro-level, structural measures, such as breaking up tech companies. This is disproportionate, but most of all, in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows at this point whether antitrust law will come out of this populist and moralist episode intact? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end badly.

Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating such a regulatory environment can be just as difficult as managing the actual provision of service.

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed-use networks (i.e., imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs and other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

Treasury Secretary Steve Mnuchin recently claimed that Amazon has “destroyed the retail industry across the United States” and should be investigated for antitrust violations. The claim doesn’t pass the laugh test. What’s more, the allegation might more rightly be leveled at Mnuchin himself.

Mnuchin. Is. Wrong.

First, while Amazon’s share of online retail in the U.S. is around 38 percent, that still only represents around 4 percent of total retail sales. It is unclear how Mnuchin imagines a company with a market share of 4 percent can have “destroyed” its competitors.
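As a back-of-the-envelope check, the two figures cited above also tell us how big online retail is relative to all retail; the percentages are from the post, and the division is the only step added here:

```python
# Figures cited in the post (approximate shares).
amazon_share_of_online = 0.38  # Amazon's share of US online retail
amazon_share_of_total = 0.04   # Amazon's share of total US retail

# If Amazon is 38% of online retail but only 4% of all retail,
# online retail itself must be roughly 4/38 of total retail.
online_share_of_total = amazon_share_of_total / amazon_share_of_online
print(f"Implied online share of total US retail: {online_share_of_total:.0%}")
```

In other words, the entire online channel is only about a tenth of US retail, which underscores how small Amazon's footprint is in the overall retail market.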

Second, nearly 60 percent of Amazon’s sales come from third-party vendors — i.e., other retailers — many of whom would not exist but for Amazon’s platform. So, far from destroying U.S. retail, Amazon arguably has enabled U.S. online retail to thrive.

Third, even many of the brick-and-mortar retailers allegedly destroyed by Amazon have likely actually benefited from its innovative, cost-cutting approaches, which have reduced the cost of inputs. For example, in its Business Prime Program, Amazon offers discounts on a large array of goods, as well as incentives for bulk purchases, and flexible financing offers. Along with those direct savings, it also allows small businesses to use its analytics capabilities to track and manage the supply chain inputs they purchase through Amazon.

It’s no doubt true that many retailers are unhappy about the price-cutting and retail price visibility that Amazon (and many other online retailers) offer to consumers. But, fortunately, online competition is a fact that will not go away even if Amazon does. Meanwhile, investigating Amazon for antitrust violations — presumably with the objective of imposing some structural remedy? — would harm a truly great American innovator. And to what end? To protect inefficient, overpriced retailers? 

Indeed, the better response for retailers is not to gripe about Amazon but to invest in better ways to serve consumers in order to compete more effectively. And that’s what many retailers are doing: Walmart, Target, and Kroger are investing billions to improve both their brick-and-mortar retail businesses and their online businesses. As a result, each of them individually still sells more than Amazon.

In fact, Walmart has about 23% of grocery retail sales. By Mnuchin’s logic, Walmart must be destroying the grocery industry too. 

The real destroyer of retail

It is ironic that Steve Mnuchin should claim that Amazon has “destroyed” U.S. retail, given his support for the administration’s tariff policy, which is actually severely harming U.S. retailers. In the apparel industry, “[b]usinesses have barely been able to survive the 10 percent tariff. [The administration’s proposed] 25 percent is not survivable.” Low-margin retailers like Dollar Tree suffered punishing hits to stock value in the wake of the tariff announcements. And small producers and retailers would face, at best, dramatic income losses and, at worst, the need to fold up in the face of the current proposals. 

So, if Mr. Mnuchin is actually concerned about the state of U.S. retail, perhaps he should try to persuade his boss to stop the tariff war instead of attacking a great American retailer.

The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).

While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it. In this blog post, we evaluate the state AGs’ claims (and find them wanting).

Four firms good, three firms bad?

The state AGs’ opposition to the T-Mobile/Sprint merger is based on a claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:

The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.

. . . 

Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.

But there are no economic grounds for the assertion that a four firm industry is on a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.

A proper assessment of this transaction—as well as any other telecom merger—requires accounting for the specific characteristics of the markets affected by the merger. The accounting would include, most importantly, the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.

Opponents claim this merger takes us from four to three national carriers. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.

In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will decline following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improving service and–over time–lowering the cost of access.

Is prepaid a separate market?

The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitutes a separate product market:

There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with a MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.

Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between pre and postpaid markets, today the line between them is at least blurry, and may not even be a meaningful divide at all.

To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market. 

But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, there are some prepaid consumers who don’t have the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past — and it has not been demonstrated that such price discrimination is possible or likely.

One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with the DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.

Will the merger harm low-income consumers?

While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention paid to that divide in submissions to the federal agencies. For example, the Communications Workers of America opined:

the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.

This is merely an assertion, and a misleading one. To the extent the “digital divide” would grow following the merger, it would be because urban access would improve more rapidly than rural access, not because rural access would decline.

Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge. 

Indeed, in the absence of a merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilization of that spectrum would require substantial investment in new infrastructure and additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.

It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings that are expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, exactly how much each group will benefit. But, prima facie, the prospect of improvement in rural access seems a strong argument in favor of the merger from a public interest standpoint.

The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.
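The narrowing described above can be made concrete with the access figures just cited (the percentages are from the post; the subtraction is the only step added here):

```python
# Household internet-access rates cited above, since 2011.
low_income_2011, low_income_now = 0.52, 0.62   # households earning <= $25,000
overall_2011, overall_now = 0.72, 0.78         # US population as a whole

# The income-based access gap, in percentage points.
gap_2011 = overall_2011 - low_income_2011
gap_now = overall_now - low_income_now
print(f"Access gap: {gap_2011:.0%} in 2011 vs {gap_now:.0%} now")
```

The gap shrinks from 20 to 16 percentage points, consistent with access rising fastest among lower-income households.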

Concluding remarks

By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.

There’s always a reason to block a merger:

  • If a firm is too big, it will be because it is “a merger for monopoly”;
  • If the firms aren’t that big, it will be for “coordinated effects”;
  • If a firm is small, then it will be because it will “eliminate a maverick”.

It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:

Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.

Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.

The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:

  1. With a new technology or business model that threatens to disrupt market conditions;
  2. With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
  3. That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
  4. With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”

There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.

For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises the question—in a model in which every firm faces the same cost and demand conditions—why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.

The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition, a single firm can make the difference between the success or failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:

Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.

A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.

The government’s complaint in the proposed 2011 AT&T/T-Mobile merger alleges:

Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.

At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. At the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35%, and AT&T’s market share remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T’s or Verizon’s market shares.
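The share movements just described can be decomposed in a couple of lines; the figures are from the post, and the attribution of roughly half the gain to MetroPCS is likewise the post's own estimate:

```python
# T-Mobile's share of US wireless subscribers, per the figures above.
tmobile_2011, tmobile_2016 = 0.11, 0.17

gain = tmobile_2016 - tmobile_2011   # 6 percentage points over ~5 years
metropcs_gain = gain / 2             # ~half attributed to the 2012 MetroPCS merger
organic_gain = gain - metropcs_gain
print(f"Organic share gain: about {organic_gain:.0%} of the market")
```

An organic gain of roughly 3 percentage points over five years is hard to square with the image of a firm that disciplines the two largest carriers.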

Geoffrey Manne raised some questions about the government’s maverick theory, which also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:

. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.

While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick, that’s product differentiation with hedonic pricing.

More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman from Public Knowledge asserts that both firms are mavericks and their combination would cause their maverick magic to disappear:

Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.

Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:

Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.

For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.

What’s left are ad hoc assertions mixed with speculative projections, in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:

The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.

It’s time to send the maverick out of town and into the sunset.

 

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech that violates the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

[This post is the fourth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Pallavi Guniganti, editor of Global Competition Review.]

Start with the assumption that there is a problem

The European Commission and Austria’s Federal Competition Authority are investigating Amazon over its use of Marketplace sellers’ data. US senator Elizabeth Warren has said that one reason to require “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” is to prevent them from using data they obtain from third parties on the platform to benefit their own participation on the platform.

Amazon tweeted in response to Warren: “We don’t use individual sellers’ data to launch private label products.” However, an Amazon spokeswoman would not answer questions about whether it uses aggregated non-public data about sellers or data from buyers, or whether any formal firewall prevents Amazon’s retail operation from accessing Marketplace data.

If the problem is solely that Amazon’s own retail operation can access data from the Marketplace, structurally breaking up the company and forbidding it and other platforms from participating on those platforms may be a far more extensive intervention than is needed. A targeted response such as a firewall could remedy the specific competitive harm.

Germany’s Federal Cartel Office implicitly recognised this with its Facebook decision, which did not demand the divestiture of every business beyond the core social network – the “Mark Zuckerberg Production” that began in 2004. Instead, the competition authority prohibited Facebook from conditioning the use of that social network on consent to the collection and combination of data from WhatsApp, Oculus, Masquerade, Instagram and any other sites or apps where Facebook might track them.

The decision does not limit data collection on Facebook itself. “It is taken into account that an advertising-funded social network generally needs to process a large amount of personal data,” the authority said. “However, the Bundeskartellamt holds that the efficiencies in a business model based on personalised advertising do not outweigh the interests of the users when it comes to processing data from sources outside of the social network.”

The Federal Cartel Office thus aims to wall off the data collected on Facebook from data that can be collected anywhere else. It ordered Facebook to present a road map for how it would implement these changes within four months of the February 2019 decision, but the time limit was suspended by the company’s emergency appeal to the Düsseldorf Higher Regional Court.

Federal Cartel Office president Andreas Mundt has described the kind of remedy he had ordered for Facebook as not exactly structural, but going in a “structural direction” that might work for other cases as well. Keeping the data apart is a way to “break up this market power” without literally breaking up the corporation, and the first step to an “internal divestiture”, he said.

Mundt claimed that this kind of remedy gets to “the core of the problem”: big internet companies being able to out-compete new entrants, because the former can obtain and process data even beyond what they collected on a single service that has attracted a large number of users.

He used terms like “silo” rather than “firewall”, but the essential idea is to protect competition by preventing the dissemination of certain information. Antitrust authorities worldwide have considered firewalls, particularly in vertical merger remedies, as a way to prevent the anticompetitive movement of data while still allowing for some efficiencies of business units being under the same corporate umbrella.

Notwithstanding Mundt’s reference to a “structural direction”, competition authorities including his own have traditionally classified firewalls as a behavioural or conduct remedy. They purport to solve a specific problem: the movement of information.

Other aspects of big companies that can give them an advantage – such as the use of profits from one part of a company to invest in another part, perhaps to undercut rivals on prices – would not be addressed by firewalls. They would more likely require dividing up a company at the corporate level.

But if data are the central concern, then the way forward might be found in firewalls.

What do the enforcers say?

Germany

The Federal Cartel Office’s May 2017 guidance on merger remedies disfavours firewalls, stating that such obligations are “not suitable to remedy competitive harm” because they require continuous oversight. Employees of a corporation in almost any sector commonly exchange information on a daily basis, making it “extremely difficult to identify, stop and prevent non-compliance with the firewall obligations”, the guidance states. In a footnote, it acknowledges that other, unspecified jurisdictions have regarded firewalls “as an effective remedy to remove competition concerns”.

UK

The UK’s Competition and Markets Authority takes a more optimistic view of the ability to keep a firewall in place, at least in the context of a vertical integration to prevent the use of “privileged information generated by competitors’ use of the merged company’s facilities or products”. In addition to setting up the company to restrict information flows, staff interactions and the sharing of services, physical premises and management, the CMA also requires the commitment of “significant resources to educating staff about the requirements of the measures and supporting the measures with disciplinary procedures and independent monitoring”. 

EU

The European Commission’s merger remedies notice is quite short. It does not mention firewalls or Chinese walls by name, simply noting that any non-structural remedy is problematic “due to the absence of effective monitoring of its implementation” by the commission or even other market participants. A 2011 European Commission submission to the Organisation for Economic Co-operation and Development was gloomier: “We have also found that firewalls are virtually impossible to monitor.”

US DOJ

The US antitrust agencies have been inconsistent in their views, and not on a consistent partisan basis. Under George W Bush, the Department of Justice’s antitrust division’s 2004 merger guidance said “a properly designed and enforced firewall” could prevent certain competition harms. But it also would require the DOJ and courts to expend “considerable time and effort” on monitoring, and “may frequently destroy the very efficiency that the merger was designed to generate. For these reasons, the use of firewalls in division decrees is the exception and not the rule.”

 Under Barack Obama, the Antitrust Division revised its guidance in 2011 to omit the most sceptical language about firewalls, replacing it with a single sentence about the need for effective monitoring. Under Donald Trump, the Antitrust Division has withdrawn the 2011 guidance, and the 2004 guidance is operative.

US FTC

At the Federal Trade Commission, on the other hand, firewalls had long been relatively uncontroversial among both Republicans and Democrats. For example, the commissioners unanimously agreed to a firewall remedy for PepsiCo’s and Coca-Cola’s separate 2010 acquisitions of bottlers and distributors that also dealt with a rival beverage maker, the Dr Pepper Snapple Group. (The FTC later emphasised the importance in those cases of obtaining industry expert monitors, who “have provided commission staff with invaluable insight and evaluation regarding each company’s compliance with the commission’s orders”.)

In 2017, the two commissioners who remained from the Obama administration both signed off on the Broadcom/Brocade merger based on a firewall – as did the European Commission, which also mandated interoperability commitments. And the Democratic commissioners appointed by President Trump voted with their Republican colleagues in 2018 to clear the Northrop Grumman/Orbital ATK deal subject to a behavioural remedy that included supply commitments and firewalls.

Several months later, however, those Democrats dissented from the FTC’s approval of Staples/Essendant, which the agency conditioned solely on a firewall between Essendant’s wholesale business and the Staples unit that handles corporate sales. While a firewall to prevent Staples from exploiting Essendant’s commercially-sensitive data about Staples’ rivals “will reduce the chance of misuse of data, it does not eliminate it,” Commissioner Rohit Chopra said. He emphasised the difficulty of policing oral communications, and said the FTC instead could have required Essendant to return its customers’ data. Commissioner Rebecca Kelly Slaughter said she shared Chopra’s “concerns about the efficacy of the firewall to remedy the information sharing harm”.

The majority defended firewalls’ effectiveness, noting that it had used them to solve competition concerns in past vertical mergers, “and the integrity of those firewalls was robust.” The Republican commissioners cited the FTC’s review of the merger remedies it had imposed from 2006 to 2012, which concluded: “All vertical merger orders were judged successful.”

Republican commissioner Christine Wilson wrote separately about the importance of choosing “a remedy that is narrowly tailored to address the likely competitive harms without doing collateral damage.” Certain behavioural remedies for past vertical mergers had gone too far and even resulted in less competition, she said. “I have substantially fewer qualms about long-standing and less invasive tools, such as the ‘firewalls, fair dealing, and transparency provisions’ the Antitrust Division endorsed in the 2004 edition of its Policy Guide.”

Why firewalls don’t work, especially for big tech

Firewalls are designed to prevent the anticompetitive harm of information exchange, but whether they work depends on whether the companies and their employees behave themselves – and if they do not, on whether antitrust enforcers can know it and prove it. Deputy assistant attorney general Barry Nigro at the Antitrust Division has questioned the effectiveness of firewalls as a remedy for deals where the relevant business units are operationally close. The same problem may arise outside the merger context.

For example, Amazon’s investment fund for products to complement its Alexa virtual assistant could be seen as having the kind of firewall that is undercut by the practicalities of how a business operates. CNBC reported in September 2017 that “Alexa Fund representatives called a handful of its portfolio companies to say a clear ‘firewall’ exists between the Alexa Fund and Amazon’s product development teams.” The chief executive from Nucleus, one of those portfolio companies, had complained that Amazon’s Echo Show was a copycat of Nucleus’s product. While Amazon claimed that the Alexa Fund has “measures” to ensure “appropriate treatment” of confidential information, the companies said the process of obtaining the fund’s investment required them to work closely with Amazon’s product teams.

CNBC contrasted this with Intel Capital – a division of the technology company that manages venture capital and investment – where a former managing director said he and his colleagues “tried to be extra careful not to let trade secrets flow across the firewall into its parent company”.

Firewalls are commonplace to corporate lawyers, who erect temporary blocks on the transmission of information in a variety of situations, such as during due diligence on a deal. This experience may lead such attorneys to put more faith in firewalls than enforcement advocates do.

Diana Moss, the president of the American Antitrust Institute, says that like other behavioral remedies, firewalls “don’t change any incentive to exercise market power”. In contrast, structural remedies eliminate that incentive by removing the part of the business that would make the exercise of market power profitable.

No internal monitoring or compliance ensures the firewall is respected, Moss says, unless a government consent order installs a monitor in a company to make sure the business units aren’t sharing information. This would be unlikely to occur, she says.

Moss’s 2011 white paper on behavioural merger remedies, co-authored with John Kwoka, reviews how well such remedies have worked. It notes that “information firewalls in Google-ITA and Comcast-NBCU clearly impede the joint operation and coordination of business divisions that would otherwise naturally occur.” 

Lina Khan’s 2019 Columbia Law Review article, “The Separation of Platforms and Commerce,” repeatedly cites Moss and Kwoka in the course of arguing that non-separation solutions such as firewalls do not work.

Khan concedes that information firewalls “in theory could help prevent information appropriation by dominant integrated firms.” But regulating the dissemination of information is especially difficult “in multibillion dollar markets built around the intricate collection, combination, and sale of data”, as companies in those markets “will have an even greater incentive to combine different sets of information”.

Why firewalls might work, especially for big tech

Yet neither Khan nor Moss points to an example of a firewall that clearly did not work. Khan writes: “Whether the [Google-ITA] information firewall was successful in preventing Google from accessing rivals’ business information is not publicly known. A year after the remedy expired, Google shut down” the application programming interface, through which ITA had provided its customisable flight search engine.

Even as enforcement advocates throw doubt on firewalls, enforcers keep requiring them. China’s Ministry of Commerce even used them to remedy a horizontal merger, in two stages of its conditions on Western Digital’s acquisition of Hitachi’s hard disk drive business.

If German courts allow Andreas Mundt’s remedy for Facebook to go into effect, it will provide an example of just how effective a firewall can be on a platform. The decision requires Facebook to detail its technical plan to implement the obligation not to share data on users from its subsidiaries and its tracking on independent websites and apps.

A section of the “frequently asked questions” about the Federal Cartel Office’s Facebook case includes: “How can the Bundeskartellamt enforce the implementation of its decision?” The authority can impose fines for known non-compliance, but that assumes it could detect violations of its order. Somewhat tentatively, the agency says it could carry out random monitoring, which is “possible in principle… as the actual flow of data eg from websites to Facebook can be monitored by analysing websites and their components or by recording signals.”

As perhaps befits the digital difference between Staples and Facebook, the German authority posits monitoring that would not be able to catch the kind of “oral communications” that Commissioner Chopra worried about when the US FTC cleared Staples’ acquisition of Essendant. But the use of such high-tech monitoring could make firewalls even more appropriate as a remedy for platforms – which look to large data flows for a competitive advantage – than for old-economy sales teams that could harm competition with just a few minutes of conversation.

Rather than a human monitor installed in a company to guard against firewall breaches, which Moss said was unlikely, software installed on employee computers and email systems might detect data flows between business units that should be walled off from each other. Breakups and firewalls are both longstanding remedies, but the latter may be more amenable to the kind of solutions that “big tech” itself has provided.
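To make the idea concrete, the kind of automated monitoring envisioned above can be sketched in a few lines. The unit names, the log format, and the single pairwise rule here are all hypothetical illustrations, not a description of any actual compliance system, which would be far more elaborate.

```python
# A minimal sketch of software-based firewall monitoring: flag any data
# transfer that crosses a walled-off business-unit boundary.
# The unit names and log format are hypothetical.

# Ordered pairs (sender_unit, recipient_unit) that the firewall forbids.
WALLED_OFF = {("marketplace", "retail"), ("retail", "marketplace")}

def flag_violations(transfer_log):
    """Return the transfers that cross a walled-off boundary.

    transfer_log: iterable of (sender_unit, recipient_unit, description)
    tuples, e.g. as extracted from email or file-transfer logs.
    """
    return [t for t in transfer_log if (t[0], t[1]) in WALLED_OFF]

log = [
    ("retail", "retail", "weekly pricing memo"),            # internal, fine
    ("marketplace", "retail", "third-party seller data"),   # firewall breach
    ("legal", "retail", "contract review"),                 # not walled off
]
print(flag_violations(log))  # → [('marketplace', 'retail', 'third-party seller data')]
```

Even a rule this crude illustrates the point in the text: machine-readable data flows can be audited automatically in a way that oral conversations cannot.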

[TOTM: The following is the seventh in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.]

[This post is authored by Gerard Llobet, Professor of Economics at CEMFI, and Jorge Padilla, Senior Managing Director at Compass Lexecon. Both have advised SEP holders, and to a lesser extent licensees, in royalty negotiations and antitrust disputes.]

Over the last few years competition authorities in the US and elsewhere have repeatedly warned about the risk of patent hold-up in the licensing of Standard Essential Patents (SEPs). Concerns about such risks were front and center in the recent FTC case against Qualcomm, where the Court ultimately concluded that Qualcomm had used a series of anticompetitive practices to extract unreasonable royalties from implementers. This post evaluates the evidence for such a risk, as well as the countervailing risk of patent hold-out.

In general, hold-up may arise when firms negotiate trading terms after they have made costly, relationship-specific investments. Since the costs of these investments are sunk by the time trading terms are negotiated, they are not factored into the agreed terms. As a result, depending on the relative bargaining power of the firms, the investments made by the weaker party may be undercompensated (Williamson, 1979).

In the context of SEPs, patent hold-up would arise if SEP owners were able to take advantage of the essentiality of their patents to charge excessive royalties to manufacturers that have made irreversible investments in the standard and whose products read on those patents (see Lemley and Shapiro (2007)). Similarly, in the recent FTC v. Qualcomm ruling, trial judge Lucy Koh concluded that firms may also use commercial strategies (in this case, Qualcomm’s “no license, no chips” policy, refusing to deal with certain parties and demanding exclusivity from others) to extract royalties that depart from the FRAND benchmark.

After years of heated debate, however, there is no consensus about whether patent hold-up actually exists. Some argue that there is no evidence of hold-up in practice. If patent hold-up were a significant problem, manufacturers would anticipate that their investments would be expropriated and would thus decide not to invest in the first place. But end-product manufacturers have invested considerable amounts in standardized technologies (Galetovic et al, 2015). Others claim that while investment is indeed observed, actual investment levels are “necessarily” below those that would be observed in the absence of hold-up. They allege that, since that counterfactual scenario is not observable, it is not surprising that more than fifteen years after the patent hold-up hypothesis was first proposed, empirical evidence of its existence is lacking.

Meanwhile, innovators are concerned about a risk in the opposite direction, the risk of patent hold-out. As Epstein and Noroozi (2018) explain,

By “patent holdout” we mean the converse problem, i.e., that an implementer refuses to negotiate in good faith with an innovator for a license to valid patent(s) that the implementer infringes, and instead forces the innovator to either undertake significant litigation costs and time delays to extract a licensing payment through court order, or else to simply drop the matter because the licensing game is no longer worth the candle.

Patent hold-out, also known as “efficient infringement,” is especially relevant in the standardization context for two reasons. First, SEP owners are oftentimes required to license their patents under Fair, Reasonable and Non-Discriminatory (FRAND) conditions. Particularly when, as occurs in some jurisdictions, innovators are not allowed to request an injunction, they have little or no leverage in trying to require licensees to accept a licensing deal. Second, SEP owners typically possess many complementary patents and, therefore, seek to license their portfolio of SEPs at once, since that minimizes transaction costs. Yet, some manufacturers de facto refuse to negotiate in this way and choose to challenge the validity of the SEP portfolio patent-by-patent and/or jurisdiction-by-jurisdiction. This strategy involves large litigation costs and is therefore inefficient. SEP holders claim that this practice is anticompetitive and that it also leads to royalties that are too low.

While the concerns of SEP holders seem to have attracted the attention of the leadership of the US DOJ (see, for example, here), some authors have dismissed them as theoretically groundless, empirically immaterial and irrelevant from an antitrust perspective (see here). 

Evidence of patent hold-out from litigation

In ongoing work (Llobet and Padilla, forthcoming), we analyze the effects of the sequential litigation strategy adopted by some manufacturers and compare its consequences with the simultaneous litigation of the whole portfolio. We show that sequential litigation results in lower royalty payments than simultaneous litigation and may result in under-compensation of innovation and the dissipation of social surplus when litigation costs are high.

The model relies on two basic and realistic assumptions. First, in sequential lawsuits, the result of a trial affects the probability that each party wins the following one. That is, if the manufacturer wins the first trial, it has a higher probability of winning the second, as a first victory may uncover information about the validity of other patents that relate to the same type of innovation, which will be less likely to be upheld in court. Second, the impact of a validity challenge on royalty payments is asymmetric: they are reduced to zero if the patent is found to be invalid but are not increased if it is found valid (and infringed).

Our results indicate that these features of the legal system can be strategically used by the manufacturer. The intuition is as follows. Suppose that the innovator sets a royalty rate for each patent for which, in the simultaneous trial case, the manufacturer would be indifferent between settling and litigating. Under sequential litigation, however, the manufacturer might be willing to challenge a patent because of the gain in a future trial. This is due to the asymmetric effects that winning or losing the second trial has on the royalty rate that this firm will have to pay. In particular, if the manufacturer wins the first trial, so that the first patent is invalidated, its probability of winning the second one increases, which means that the innovator is likely to settle for a lower royalty rate for the second patent or see both patents invalidated in court. In the opposite case, if the innovator wins the first trial, so that the second is also likely to be unfavorable to the manufacturer, the latter always has the option to pay up the original royalty rate and avoid the second trial. In other words, the possibility for the manufacturer to negotiate the royalty rate downwards after a victory, without the risk of it being increased in case of a defeat, fosters sequential litigation and results in lower royalties than the simultaneous litigation of all patents would produce. 
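The intuition above can be illustrated with a stylized numerical sketch. Note that this two-patent toy model and all of its parameters (per-patent royalty, win probabilities, trial cost) are our hypothetical simplification for exposition, not the actual Llobet and Padilla model.

```python
# Stylized two-patent illustration of the sequential-litigation mechanism:
# invalidation drives a royalty to zero, while a validity finding leaves
# it unchanged, and a first win raises the odds of a second.
# All parameter values below are hypothetical.

def settle_both(r):
    """Total payment if the manufacturer accepts royalty r on both patents."""
    return 2 * r

def litigate_sequentially(r, p_win1, p_win2_after_win, trial_cost):
    """Expected total payment (royalties plus the manufacturer's trial
    costs) when patent 1 is challenged first and the next move depends
    on the outcome."""
    # If patent 1 is invalidated: pay nothing on it, then challenge
    # patent 2 with an improved win probability.
    after_win = trial_cost + (1 - p_win2_after_win) * r
    # If patent 1 is upheld: pay r on it, and exercise the option to
    # settle patent 2 at the original rate (never revised upward).
    after_loss = r + r
    return trial_cost + p_win1 * after_win + (1 - p_win1) * after_loss

r, p, q, c = 10.0, 0.3, 0.7, 1.0
print(settle_both(r))                     # → 20.0
print(litigate_sequentially(r, p, q, c))  # → 16.2
```

With these numbers the manufacturer expects to pay 16.2 by litigating sequentially versus 20.0 by settling, so a royalty that would make it indifferent under simultaneous litigation no longer deters a challenge, and the innovator must settle for less.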

This mechanism, while applicable to any portfolio that includes patents whose validity is related, becomes more significant in the context of SEPs for two reasons. The first is the difficulty innovators face in adjusting their royalties upwards after a first successful trial, as doing so might be considered a breach of their FRAND commitments. The second is that, following recent competition law litigation in the EU and other jurisdictions, SEP owners are restricted in their ability to seek (preliminary) injunctions even in the case of willful infringement. Our analysis demonstrates that the threat of injunction mitigates, though it is unlikely to eliminate completely, the incentive to litigate sequentially and, therefore, excessively (i.e. even when such litigation reduces social welfare).

We also find a second motivation for excessive litigation: business stealing. Manufacturers litigate excessively in order to avoid payment and thus achieve a valuable cost advantage over their competitors. They prefer to litigate even when litigation costs are so large that it would be preferable for society to avoid litigation, because their royalty burden is reduced both in absolute terms and relative to that of their rivals (while it does not go up if the patents are found valid). This business-stealing incentive will result in the under-compensation of innovators, as above, but importantly it may also result in the anticompetitive foreclosure of more efficient competitors.

Consider, for example, a scenario in which a large firm with the ability to fund protracted litigation efforts competes in a downstream market with a competitive fringe, comprising small firms for which litigation is not an option. In this scenario, the large manufacturer may choose to litigate to force the innovator to settle on a low royalty. The large manufacturer exploits the asymmetry with its defenseless small rivals to reduce its IP costs. In some jurisdictions it may also exploit yet another asymmetry in the legal system to achieve an even larger cost advantage. If both the large manufacturer and the innovator choose to litigate and the former wins, the patent is invalidated, and the large manufacturer avoids paying royalties altogether. Whether this confers a comparative advantage on the large manufacturer depends on whether the invalidation results in the immediate termination of all other existing licenses or not.

Our work thus shows that patent hold-out concerns are both theoretically cogent and have non-trivial antitrust implications. Whether such concerns merit intervention is an empirical matter. While reviewing that evidence is outside the scope of our work, our own litigation experience suggests that patent hold-out should be taken seriously.

[TOTM: The following is the sixth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.

This post is authored by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California Gould School of Law.]

There is little doubt that the decision in May 2019 by the Northern District of California in FTC v. Qualcomm is of historical importance. Unless reversed or modified on appeal, the decision would require that the lead innovator behind 3G and 4G smartphone technology renegotiate hundreds of existing licenses with device producers and offer new licenses to any interested chipmakers.

The court’s sweeping order caps off a global campaign by implementers to re-engineer the property-rights infrastructure of the wireless markets. Those efforts have deployed the instruments of antitrust and patent law to override existing licensing arrangements and thereby reduce the input costs borne by device producers in the downstream market. This has occurred both directly, through arguments made by those firms in antitrust and patent litigation or through the filing of amicus briefs, and indirectly, by advocating that regulators bring antitrust actions against IP licensors.

Whether or not FTC v. Qualcomm is correctly decided largely depends on whether or not downstream firms’ interest in minimizing the costs of obtaining technology inputs from upstream R&D specialists aligns with the public interest in preserving dynamically efficient innovation markets. As I discuss below, there are three reasons to believe those interests are not aligned in this case. If so, the court’s order would simply engineer a wealth transfer from firms that have led innovation in wireless markets to producers that have borne few of the costs and risks involved in doing so. Members of the former group each exhibit R&D intensities (R&D expenditures as a percentage of sales) in the high teens to low twenties; members of the latter, approximately five percent. Of greater concern, the court’s upending of long-established licensing arrangements endangers business models that monetize R&D by licensing technology to a large pool of device producers (see Qualcomm), rather than earning returns through self-contained hardware and software ecosystems (see Apple). There is no apparent antitrust rationale for picking and choosing among these business models in innovation markets.

Reason #1: FRAND is a Two-Sided Deal

To fully appreciate the recent litigations involving the FTC and Apple on the one hand, and Qualcomm on the other hand, it is necessary to return to the origins of modern wireless markets.

Starting in the late 1980s, various firms were engaged in the launch of the GSM wireless network in Western Europe. At that time, each European telecom market typically consisted of a national monopoly carrier and a favored group of local equipment suppliers. The GSM project, which envisioned a trans-national wireless communications market, challenged this model. In particular, the national carrier and equipment monopolies were threatened by the fact that the GSM standard relied in part on patented technology held by an outside innovator—namely, Motorola. As I describe in a forthcoming publication, the “FRAND” (fair, reasonable and nondiscriminatory) principles that today govern the licensing of standard-essential patents in wireless markets emerged from a negotiation between, on the one hand, carriers and producers who sought a royalty cap and, on the other hand, a technology innovator that sought to preserve its licensing freedom going forward.

This negotiation history is important. Any informed discussion of the meaning of FRAND must recognize that this principle was adopted as something akin to a “good faith” contractual term designed to promote two objectives:

  1. Protect downstream adopters from holdup tactics by upstream innovators; and
  2. Enable upstream innovators to enjoy an appreciable portion of the value generated by sales in the consumer market.

Any interpretation of FRAND that does not meet these conditions will induce upstream firms to reduce R&D investment, limit participation in standard-setting activities, or vertically integrate forward to capture directly a return on R&D dollars.

Reason #2: No Evidence of Actual Harm

In the December 2018 appellate court proceedings in which the Department of Justice unsuccessfully challenged the AT&T/Time-Warner merger, Judge David Sentelle of the D.C. Circuit said to the government’s legal counsel:

If you’re going to rely on an economic model, you have to rely on it with quantification. The bare theorem . . . doesn’t prove anything in a particular case.

The government could not credibly respond to that challenge in the AT&T case and, if appropriately challenged, could not do so in this case.

Far from being a market that calls out for federal antitrust intervention, the smartphone market offers what appears to be an almost textbook case of dynamic efficiency. For over a decade, implementers, along with sympathetic regulators and commentators, have argued that the market suffers (or, in a variation, will imminently suffer) from inflated prices, reduced output and delayed innovation as a result of “patent hold-up” and “royalty stacking” by opportunistic patent owners. In the decades that have passed since the launch of the GSM network, none of these predictions has materialized. To the contrary. The market has exhibited expanding output, declining prices (adjusted for increased functionality), constant innovation, and regular entry into the production market. Multiple empirical studies (e.g. this, this and this) have found that device producers bear on average an aggregate royalty burden in the low-to-mid single digits.

This hardly seems like a market in which producers and consumers are being “victimized” by what the Northern District of California calls “unreasonably high” licensing fees (compared to an unspecified, and inherently unspecifiable, dynamically efficient benchmark). Rather, it seems more likely that device producers—many of whom provided the testimony which the court referenced in concluding that royalty rates were “unreasonably high”—would simply prefer to pay an even lower fee to R&D input suppliers (with no assurance that any of the cost-savings would flow to consumers).

Reason #3: The “License as Tax” Fallacy

The rhetorical centerpiece of the FTC’s brief relied on an analogy between the patent license fees earned by Qualcomm in the downstream device market and the tax that everyone pays to the IRS. The court’s opinion wholeheartedly adopted this narrative, determining that Qualcomm imposes a tax (or, as Judge Koh terms it, a “surcharge”) on the smartphone market by demanding a fee from OEMs for use of its patent portfolio whether or not the OEM purchases chipsets from Qualcomm or another firm. The tax analogy is fundamentally incomplete, both in general and in this case in particular.

It is true that much of the economic literature applies monopoly taxation models to assess the deadweight losses attributed to patents. While this analogy facilitates analytical tractability, a “zero-sum” approach to patent licensing overlooks the value-creating “multiplier” effect that licensing generates in real-world markets. Specifically, broad-based downstream licensing by upstream patent owners—something to which SEP owners commit under FRAND principles—ensures that device makers can obtain the necessary technology inputs and, in doing so, facilitates entry by producers that do not have robust R&D capacities. All of that ultimately generates gains for consumers.

This “positive-sum” multiplier effect appears to be at work in the smartphone market. Far from acting as a tax, Qualcomm’s licensing policies appear to have promoted entry into the smartphone market, which has experienced fairly robust turnover in market leadership. While Apple and Samsung may currently dominate the U.S. market, they face intense competition globally from Chinese firms such as Huawei, Xiaomi and Oppo. That competitive threat is real. As of 2007, Nokia and Blackberry were the overwhelming market leaders and appeared to be indomitable. Yet neither can be found in the market today. That intense “gale of competition”, sustained by the fact that any downstream producer can access the required technology inputs upon payment of licensing fees to upstream innovators, challenges the view that Qualcomm’s licensing practices have somehow restrained market growth.

Concluding Thoughts: Antitrust Flashback

When competitive harms are so unclear (and competitive gains so evident), modern antitrust law sensibly prescribes forbearance. A famous “bad case” from antitrust history shows why.

In 1953, the Department of Justice won an antitrust suit against United Shoe Machinery Corporation, which had led innovation in shoe manufacturing equipment and subsequently dominated that market. United Shoe’s purportedly anti-competitive practices included a lease-only policy that incorporated training and repair services at no incremental charge. The court found this to be a coercive tie that preserved United Shoe’s dominant position, despite the absence of any evidence of competitive harm. Scholars have subsequently shown (e.g. this and this; see also this) that the court did not adequately consider (at least) two efficiency explanations: (1) lease-only policies were widespread in the market because this facilitated access by smaller capital-constrained manufacturers, and (2) tying support services to equipment enabled United Shoe to avoid free-riding on its training services by other equipment suppliers. In retrospect, the court relied on a mere possibility theorem to order the break-up of a technological pioneer, with potentially adverse consequences for manufacturers that relied on its R&D efforts.

The court’s decision in FTC v. Qualcomm is a flashback to cases like United Shoe in which courts found liability and imposed dramatic remedies with little economic inquiry into competitive harm. It has become fashionable to assert that current antitrust law is too cautious in finding liability. Yet there is a sound reason why, outside price-fixing, courts generally insist that theories of antitrust liability include compelling evidence of competitive harm. Antitrust remedies are strong medicine and should be administered with caution. If courts and regulators do not zealously scrutinize the factual support for antitrust claims, then they are vulnerable to capture by private entities whose business objectives may depart from the public interest in competitive markets. While no antitrust fact-pattern is free from doubt, over two decades of market performance strongly favor the view that long-standing licensing arrangements in the smartphone market have resulted in substantial net welfare gains for consumers. If so, the prudent course of action is simply to leave the market alone.

[This post is the first in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]

[This post is authored by Randal C. Picker, James Parker Hall Distinguished Service Professor of Law at The University of Chicago Law School]

The European Commission just announced that it is investigating Amazon. The Commission’s concern is that Amazon is simultaneously acting as a ref and player: Amazon sells goods directly as a first party but also operates a platform on which it hosts goods sold by third parties (resellers) and those goods sometimes compete. And, next step, Amazon is said to choose which markets to enter as a private-label seller at least in part by utilizing information it gleans from the third-party sales it hosts.

Assuming there is a problem …

Were Amazon’s activities thought to be a problem, the natural remedies, whether through antitrust or more direct, sector-specific regulation, might be to bar Amazon from both being a direct seller and a platform. India has already passed a statute that effectuates some of those results, though it seems targeted at non-domestic companies.

A broad regulation that barred Amazon from being simultaneously a seller of first-party inventory and of third-party inventory presumably would lead to a dissolution of the company into separate companies in each of those businesses. A different remedy—a classic that goes back at least as far in the United States as the 1887 Commerce Act—would be to impose some sort of nondiscrimination obligation on Amazon and perhaps to couple that with some sort of business-line restriction—a quarantine—that would bar Amazon from entering markets though private labels.

But is there a problem?

Private labels have been around a long time and large retailers have faced buy-vs.-build decisions along the way. Large, sophisticated retailers like A&P in a different era and Walmart and Costco today, just to choose a few examples, are constantly rebalancing their inventory between that which they buy from third parties and that which they produce for themselves. As I discuss below, being a platform matters for the buy-vs.-build decision, but it is far from clear that being both a store and a platform simultaneously matters importantly for how we should look at these issues.

Of course, when Amazon opened for business in July 1995 it didn’t quite face these issues immediately. Amazon sold books—it billed itself as “Earth’s Biggest Bookstore”—but there is no private label possibility for books, no effort to substitute into just selling say “The Wit and Wisdom of Jeff Bezos.” You could of course build an ebooks platform—call that a Kindle—but that would be a decade or so down the road. But as Amazon expanded into more pedestrian goods, it would, like other retailers, naturally make decisions about which inventory to source internally and which to buy from third parties.

In September 1999, Amazon opened up what was being described as an online mall. Amazon called it zShops and the idea was clear: many customers came to Amazon to buy things that Amazon wasn’t offering and Amazon would bring that audience and a variety of transaction services to third parties. Third parties would in turn pay Amazon a monthly fee and a variety of transaction fees. Amazon CEO Jeff Bezos noted (as reported in The Wall Street Journal) that those prices had been set in a way to make Amazon generally “neutral” in choosing whether to enter a market through first-party inventory or through third-party inventory.

Note that a traditional retailer and the original Amazon faced the same natural question: which goods to carry in inventory? When Amazon opened its platform, it powerfully changed that question. Even a Walmart Supercenter has limited physical shelf space and has to take something off of the shelves to stock a new product. By becoming a platform, Amazon largely outsourced the product-selection and shelf-space-allocation question to third parties. The new Amazon resellers would get access to Amazon’s substantial customer base—its audience—and to a variety of transactional services that Amazon would provide them.

An online retailer has some real informational advantages over physical stores, as the online retailer sees every product that customers search for. It is much harder, though not impossible, for a physical store to capture that information. But as Amazon became a platform, it would no longer just observe search queries for goods; it would see actual sales by the resellers. And a physical store isn’t a platform in the way that Amazon is, since the physical store is constrained by limited shelf space. But the real question here is the marginal information Amazon gets from third-party sales relative to what it would see from product searches at Amazon, its own first-party sales, and clicks on the growing amount of advertising it sells on its website.

All of that might matter for running product and inventory experiments and the corresponding pace of learning what goods customers want at what price. A physical store has to remove some item from its shelves to experiment with a new item and has to buy the item to stock it, though how much of a risk it is taking there will depend on whether the retailer can return unsold goods to the inventory supplier. A platform retailer like Amazon doesn’t have to make those tradeoffs and an online mall could offer almost an infinite inventory of items. A store or product ready for every possible search.

A possible strategy

All of this suggests a possible business strategy for a platform: let third parties run inventory experiments where the platform gets to see the results. Products that don’t sell are failed experiments, and the platform doesn’t enter those markets. But when a third party sells a product in real numbers, the platform starts selling that product as first-party inventory. Amazon would then face a buy-vs.-build decision, and that should make clear that the private-brands question is distinct from the question of whether Amazon can leverage third-party reseller information to resellers’ detriment. It can do the latter simply by buying competing goods from a wholesaler and stocking that item as first-party Amazon inventory.

If Amazon is playing this strategy, it seems to be playing it slowly and poorly. Amazon CEO Jeff Bezos includes a letter each year to open Amazon’s annual report to shareholders. In the 2018 letter, Bezos opened by noting that “[s]omething strange and remarkable has happened over the last 20 years.” What was that? In 1999, the relevant number was 3%; five years later, in 2004, it was 25%, then 31% in 2009, 49% in 2014 and 58% in 2018. These were the percentages of physical gross merchandise sales by third-party sellers through Amazon. In 1999, 97% of Amazon’s sales were of its own first-party inventory, but the percentage of third-party sales rose steadily over 20 years, and over the last four years of that period, third-party inventory sales exceeded Amazon’s own internal sales. As Bezos noted, Amazon’s first-party sales had grown dramatically—a 25% annual compound growth rate over that period—but in 2018, total third-party sales revenues were $160 billion while Amazon’s own first-party sales were $117 billion. Bezos had a perspective on all of that—“Third-party sellers are kicking our first party butt. Badly.”—but if you believed the original vision behind creating the Amazon platform, Amazon should be indifferent between first-party sales and third-party sales, as long as all of that happens at Amazon.
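Those figures hang together arithmetically. As a quick sanity check (using only the numbers quoted from Bezos’s letter; the computed growth factor is my own back-of-the-envelope arithmetic, not a figure from the letter), a few lines of Python confirm both the 2018 third-party share and the rough magnitude of a 25% compound growth rate sustained over 19 years:

```python
# Third-party share of physical gross merchandise sales at Amazon,
# as reported in Bezos's 2018 shareholder letter (quoted above).
shares = {1999: 0.03, 2004: 0.25, 2009: 0.31, 2014: 0.49, 2018: 0.58}

# 2018 totals from the letter: $160B third-party vs. $117B first-party.
third_party_2018 = 160
first_party_2018 = 117
total_2018 = third_party_2018 + first_party_2018

# The 2018 dollar figures imply the same share the letter reports.
print(round(third_party_2018 / total_2018, 2))  # → 0.58

# A 25% compound annual growth rate over 1999-2018 (19 years) implies
# first-party sales grew by a factor of roughly 1.25 ** 19, i.e. ~69x.
print(round(1.25 ** 19))  # → 69
```

The point of the check is simply that the letter’s two framings, percentage shares and dollar totals, tell a consistent story: first-party sales grew enormously in absolute terms even as their share of the platform’s sales fell from 97% to 42%.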

This isn’t new

Given all of that, it isn’t crystal clear to me why Amazon gets as much attention as it does. The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws.

As suggested above, I think it is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to build or buy first-party inventory. We have entire bodies of law—copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

In conclusion

There is a great deal more to say about a company as complex as Amazon, but two thoughts in closing. One story here is that Amazon has built a superior business model in combining first-party and third-party inventory sales and that is exactly the kind of business model innovation that we should applaud. Amazon has enjoyed remarkable growth but Walmart is still vastly larger than Amazon (ballpark numbers for 2018 are roughly $510 billion in net sales for Walmart vs. roughly $233 billion for Amazon – including all 3rd party sales, as well as Amazon Web Services). The second story is the remarkable growth of sales by resellers at Amazon.

If Amazon is creating private-label goods based on information it sees on its platform, nothing suggests that it is doing so particularly rapidly. And even if it is entering those markets, it might still do so were we to break up Amazon and separate the platform piece (call it Amazon Platform) from the original first-party version (say Amazon Classic): traditional retailers have for a very, very long time been making buy-vs.-build decisions on their first-party inventory, using their internal information to make those decisions.

[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties (Section 230 purists may object to my use of “platform” as an approximation of the statute’s term, “interactive computer services”; I address this concern by acknowledging it with this parenthetical). This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, as I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including responding to the obvious (and overwrought) objections to it. But first it offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230. 

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so comprehensively. That case left platforms with a stark choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but they are free to moderate that content without facing liability for doing so. It recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; but, on the other hand, it recognizes that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content.

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for intellectual property claims and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability.

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best-known such case was the 9th Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault.

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the 9th Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty between the platform and the user. This duty to warn was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Once triggered, this duty to warn exists between the platform and the user. Because the platform faces liability directly for its failure to warn, it is not shielded by section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash, so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront. 

The Third Circuit judges would have none of that argument. All three judges agreed that under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct insufficient as a basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. States are more likely to have product liability laws than duty-to-warn laws, and product liability laws tend to be broader. Even more impactful, product liability laws generally impose strict liability, whereas duty-to-warn laws are generally triggered by an actual-knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230 and that increasingly reflect and influence discussions in the United States.

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, converting the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation.

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and is needed to keep vital.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, by email addresses on widely-used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context to the extent such compliance is feasible for the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But 25 years later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are clear bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, or even SMS authentication. None of these is perfect; they present a range of implementation costs and sophistication, and a corresponding range of ease of identification.

The proposal offered here is not that platforms be able to identify their speakers – it is better described as requiring that they not deliberately act as a liability shield. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes would be fine to maintain a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to have better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally-actionable speech.

This proportionality approach also addresses the anonymous speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for – and, as contemporary online discussion makes amply clear, can bring out the worst of – speech that is socially and politically destructive. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230. 

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that their users engaging in problematic speech can be identified by parties harmed by their speech or conduct. In an era in which sites like 8chan expressly don’t maintain user logs in order to shield themselves from known harmful speech, and Amazon Marketplace allows sellers into the market who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms engage in. It effectively tasks platforms with the job of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail effectively to do so.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally-focused safety valve to maintain their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against the user in the courts, then the “moderation” process occurs in the courts through ordinary civil litigation. 

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of the speaker’s) anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, controlled) content moderators. 

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was allowed at the time Section 230 was adopted because user-identity requirements such as those proposed here would not then have been technologically reasonable. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Yesterday was President Trump’s big “Social Media Summit” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court): 

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, since the Reagan FCC abolished it in 1987, have consistently angled for a resurrection of some form of fairness doctrine, as well as other policies inconsistent with the “free-speech-as-liberty” view. Thus Democratic commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.