
[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties. (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term, “interactive computer service”; consider the objection acknowledged by this parenthetical.) This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, as I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be made proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including a response to the obvious (and overwrought) objections to it. But first, the post offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230.

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. § 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of Section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case holding that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case left platforms with an all-or-nothing choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failing to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but that they are free to moderate that content without facing liability for doing so. Section 230 recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require platforms to moderate all user-generated content, so section (c)(1) says that they don’t need to; on the other hand, it recognized that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content.

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for copyrighted content and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability. 

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best-known case along these lines was the Ninth Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault.

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the Ninth Circuit’s opinion was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty running directly between the platform and the user. That duty was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Because the platform faces liability directly for its own failure to warn, it is not shielded by Section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct).

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash and so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of its Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront.

The Third Circuit judges would have none of that argument. All three judges agreed that, under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found that Amazon could be held liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct an insufficient basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. More states have broad product liability laws than have duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual-knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state-law duties requiring platforms to protect users from third-party harms can survive Section 230 defenses.

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should mark a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230, and that increasingly reflect and influence discussions in the United States.

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, turning the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation.

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and remains necessary to sustain.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, email addresses on widely used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or because the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context, to the extent such compliance is feasible given the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But 25 years later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques for establishing a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, to SMS authentication. None of these is perfect; they vary in the cost and sophistication needed to implement them, and they offer a range of ease of identification.

The proposal offered here is not that platforms must be able to identify their speakers – it is better described as requiring that they not deliberately act as liability shields. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes could satisfy it by maintaining a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.
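To make the proportionality idea concrete, here is a minimal sketch of what a tiered identity log might look like. Everything in it – the tier names, the record format, the subpoena lookup – is hypothetical illustration, not a description of any actual platform’s systems or of anything the law would mandate:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Hypothetical tiers of identity assurance, roughly ordered by cost to
# implement and by how readily each supports a later unmasking subpoena.
class IdentityTier(Enum):
    IP_LOG = 1          # log IP addresses per session
    VERIFIED_EMAIL = 2  # valid address with an established provider
    SOCIAL_LOGIN = 3    # registration via an established social identity
    SMS_AUTH = 4        # phone-number verification

@dataclass
class IdentityRecord:
    username: str
    tier: IdentityTier
    evidence: dict  # e.g., {"ip": "203.0.113.7"} or {"phone": "+1..."}
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class IdentityLog:
    """Append-only log a platform might keep so that users engaged in
    harmful speech or conduct remain identifiable via legal process."""

    def __init__(self, required_tier: IdentityTier):
        self.required_tier = required_tier
        self._records = []

    def register(self, record: IdentityRecord) -> None:
        # Enforce the platform's (proportional) minimum identity tier.
        if record.tier.value < self.required_tier.value:
            raise ValueError(
                f"{record.username}: {record.tier.name} is below the "
                f"required tier {self.required_tier.name}")
        self._records.append(record)

    def respond_to_subpoena(self, username: str) -> list:
        # Produce whatever identifying evidence exists for a named user.
        return [r for r in self._records if r.username == username]

# A small recipe forum might require only IP logging, while a large
# commercial marketplace might require SMS-verified merchant identities.
recipe_forum = IdentityLog(required_tier=IdentityTier.IP_LOG)
recipe_forum.register(IdentityRecord(
    "breadfan42", IdentityTier.IP_LOG, {"ip": "203.0.113.7"}))
marketplace = IdentityLog(required_tier=IdentityTier.SMS_AUTH)
print(recipe_forum.respond_to_subpoena("breadfan42"))
```

The only point of the sketch is that the obligation scales with the minimum tier a platform of a given size and risk profile would be expected to set; nothing about the proposal would prescribe particular data structures.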

This proportionality approach also addresses the anonymous-speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for speech that is socially and politically destructive – and, as contemporary online discussion makes amply clear, it can bring out the worst in speakers. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230.

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that users engaging in problematic speech can be identified by the parties harmed by that speech or conduct. In an era in which sites like 8chan expressly decline to maintain user logs in order to shield users engaged in known harmful speech, and in which Amazon Marketplace admits sellers who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms make. It effectively tasks platforms with undertaking the work of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, while offering them a gentler, more binary, and procedurally focused safety valve for maintaining their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against that user, then the “moderation” process occurs in the courts through ordinary civil litigation.

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies; is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of) the speaker’s anonymity; and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated by a comparison between Sections 230 and 512. Section 512, put into place by the 1998 Digital Millennium Copyright Act, is an exception to Section 230 for copyrighted materials. It takes copyrighted materials outside the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, imposes only an intermediate, procedural requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, government-controlled) content moderators.

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated when Section 230 was adopted because user-identity requirements such as those proposed here would not then have been technologically reasonable. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Yesterday was President Trump’s big “Social Media Summit,” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court):

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, since the Reagan FCC abolished the fairness doctrine in 1987, have consistently angled for its resurrection in some form, as well as for other policies inconsistent with the “free-speech-as-liberty” view. Thus, Democratic FCC Commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

After spending a few years away from ICLE, directly engaged in the day-to-day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior I had not previously studied. For instance, I think the law and economics tradition – specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals – can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a matter of funding, of effort or care, or even simply of time for overburdened lawyers, but as an allocation problem: in the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. Others contract with lawyers for particular cases, appoint lawyers for a flat fee, rely on non-profit agencies, or combine these approaches in some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers: one who works for a public defender’s office and receives a salary that does not depend on caseload or billable hours, and a private defense lawyer who charges his client for the work he puts in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the back burner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that will not result in a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis for filing the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.

The public defender, on the other hand, has no price signals to determine where this suppression motion should fall within his workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching filing deadline, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who face no rationing principle based in personal monetary costs, would obviously prefer that their public defender file any and all motions that have any chance whatsoever of helping them, regardless of merit.
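The calculation problem can be made concrete with a toy sketch (all numbers invented for illustration). With prices, a lawyer can rank tasks by what an hour of work is worth to the client; without them, the default heuristic is nearest-deadline-first, which buries the high-value motion:

```python
# Toy allocation problem: (case, hours needed, days to deadline,
# client willingness to pay). All numbers are hypothetical.
cases = [
    ("suppression motion", 40, 90, 8000),
    ("routine plea",        5, 10,  500),
    ("minor probation",     3,  7,  200),
]

# With price signals: rank by value per hour of lawyer time.
by_price = sorted(cases, key=lambda c: c[3] / c[1], reverse=True)

# Without price signals: the deadline heuristic ranks by urgency alone.
by_deadline = sorted(cases, key=lambda c: c[2])

print([c[0] for c in by_price])     # suppression motion ranked first
print([c[0] for c in by_deadline])  # suppression motion ranked last
```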

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices about how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or makes the most calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes this means cases are worked on in chronological order, and insufficient time and effort are spent on particular cases that would have merited more investment, because of quickly approaching deadlines in other cases. Sometimes it means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, attempting triage by spending their time where they believe they can help the most.

Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price in order to take on a particularly difficult and time-consuming case. The public defender is therefore stuck guessing at the best use of his time with the heuristics described above and doing the very best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that will be part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements, like a well-designed voucher system, which do not suffer from this particular problem may be preferable.

Last year, real estate developer Alastair Mactaggart spent nearly $3.5 million to put a privacy law on the ballot in California’s November election. He then negotiated a deal with state lawmakers to withdraw the ballot initiative if they passed their own privacy bill. That law — the California Consumer Privacy Act (CCPA) — was enacted after only seven days of drafting and amending. CCPA will go into effect six months from today.

According to Mactaggart, it all began when he spoke with a Google engineer and was shocked to learn how much personal data the company collected. This revelation motivated him to find out exactly how much of his data Google had. Perplexingly, instead of using Google’s freely available transparency tools, Mactaggart decided to spend millions to pressure the state legislature into passing new privacy regulation.

The law establishes six consumer rights: the right to know; the right of data portability; the right to deletion; the right to opt out of data sales; the right not to be discriminated against as a user; and a private right of action for data breaches.

So, what are the law’s prospects when it goes into effect next year? Here are ten reasons why CCPA is going to be a dumpster fire.

1. CCPA compliance costs will be astronomical

“TrustArc commissioned a survey of the readiness of 250 firms serving California from a range of industries and company size in February 2019. It reports that 71 percent of the respondents expect to spend at least six figures in CCPA-related privacy compliance expenses in 2019 — and 19 percent expect to spend over $1 million. Notably, if CCPA were in effect today, 86 percent of firms would not be ready. An estimated half a million firms are liable under the CCPA, most of which are small- to medium-sized businesses. If all eligible firms paid only $100,000, the upfront cost would already be $50 billion. This is in addition to lost advertising revenue, which could total as much as $60 billion annually.” (AEI / Roslyn Layton)
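The quoted $50 billion figure is simply the compliance-cost floor multiplied across the estimated number of liable firms:

\[
500{,}000 \text{ firms} \times \$100{,}000 \text{ per firm} = \$50 \text{ billion}
\]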

2. CCPA will be good for Facebook and Google (and bad for small ad networks)

“It’s as if the privacy activists labored to manufacture a fearsome cannon with which to subdue giants like Facebook and Google, loaded it with a scattershot set of legal restrictions, aimed it at the entire ads ecosystem, and fired it with much commotion. When the smoke cleared, the astonished activists found they’d hit only their small opponents, leaving the giants unharmed. Meanwhile, a grinning Facebook stared back at the activists and their mighty cannon, the weapon that they had slyly helped to design.” (Wired / Antonio García Martínez)

“Facebook and Google ultimately are not constrained as much by regulation as by users. The first-party relationship with users that allows these companies relative freedom under privacy laws comes with the burden of keeping those users engaged and returning to the app, despite privacy concerns.” (Wired / Antonio García Martínez)

3. CCPA will enable free-riding by users who opt out of data sharing

“[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, CCPA enables free riders—individuals that opt out but still expect the same services and price—and undercuts access to free content and services. Someone must pay for free services, and if individuals opt out of their end of the bargain—by allowing companies to use their data—they make others pay more, either directly or indirectly with lower quality services. CCPA tries to compensate for the drastic reduction in the effectiveness of online advertising, an important source of income for digital media companies, by forcing businesses to offer services even though they cannot effectively generate revenue from users.” (ITIF / Daniel Castro and Alan McQuinn)

4. CCPA is potentially unconstitutional as-written

“[T]he law potentially applies to any business throughout the globe that has/gets personal information about California residents the moment the business takes the first dollar from a California resident. Furthermore, the law applies to some corporate affiliates (parent, subsidiary, or commonly owned companies) of California businesses, even if those affiliates have no other ties to California. The law’s purported application to businesses not physically located in California raises potentially significant dormant Commerce Clause and other Constitutional problems.” (Eric Goldman)

5. GDPR compliance programs cannot be recycled for CCPA

“[C]ompanies cannot just expand the coverage of their EU GDPR compliance measures to residents of California. For example, the California Consumer Privacy Act:

  • Prescribes disclosures, communication channels (including toll-free phone numbers) and other concrete measures that are not required to comply with the EU GDPR.
  • Contains a broader definition of “personal data” and also covers information pertaining to households and devices.
  • Establishes broad rights for California residents to direct deletion of data, with differing exceptions than those available under GDPR.
  • Establishes broad rights to access personal data without certain exceptions available under GDPR (e.g., disclosures that would implicate the privacy interests of third parties).
  • Imposes more rigid restrictions on data sharing for commercial purposes.”

(IAPP / Lothar Determann)

6. CCPA will be a burden on small- and medium-sized businesses

“The law applies to businesses operating in California if they generate an annual gross revenue of $25 million or more, if they annually receive or share personal information of 50,000 California residents or more, or if they derive at least 50 percent of their annual revenue by “selling the personal information” of California residents. In effect, this means that businesses with websites that receive traffic from an average of 137 unique Californian IP addresses per day could be subject to the new rules.” (ITIF / Daniel Castro and Alan McQuinn)
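The quoted 137-visitor figure follows from spreading the 50,000-consumer threshold evenly across a year:

\[
\frac{50{,}000 \text{ California residents per year}}{365 \text{ days}} \approx 137 \text{ unique visitors per day}
\]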

CCPA “will apply to more than 500,000 U.S. companies, the vast majority of which are small- to medium-sized enterprises.” (IAPP / Rita Heimes and Sam Pfeifle)

7. CCPA’s definition of “personal information” is extremely over-inclusive

“CCPA likely includes gender information in the “personal information” definition because it is “capable of being associated with” a particular consumer when combined with other datasets. We can extend this logic to pretty much every type or class of data, all of which become re-identifiable when combined with enough other datasets. Thus, all data related to individuals (consumers or employees) in a business’ possession probably qualifies as “personal information.” (Eric Goldman)

“The definition of “personal information” includes “household” information, which is particularly problematic. A “household” includes the consumer and other co-habitants, which means that a person’s “personal information” oxymoronically includes information about other people. These people’s interests may diverge, such as with separating spouses, multiple generations under the same roof, and roommates. Thus, giving a consumer rights to access, delete, or port “household” information affects other people’s information, which may violate their expectations and create major security and privacy risks.” (Eric Goldman)

8. CCPA penalties might become a source for revenue generation

“According to the new Cal. Civ. Code §1798.150, companies that become victims of data theft or other data security breaches can be ordered in civil class action lawsuits to pay statutory damages between $100 to $750 per California resident and incident, or actual damages, whichever is greater, and any other relief a court deems proper, subject to an option of the California Attorney General’s Office to prosecute the company instead of allowing civil suits to be brought against it.” (IAPP / Lothar Determann)

“According to the new Cal. Civ. Code §1798.155, companies can be ordered in a civil action brought by the California Attorney General’s Office to pay penalties of up to $7,500 per intentional violation of any provision of the California Consumer Privacy Act, or, for unintentional violations, if the company fails to cure the unintentional violation within 30 days of notice, $2,500 per violation under Section 17206 of the California Business and Professions Code. Twenty percent of such penalties collected by the State of California shall be allocated to a new “Consumer Privacy Fund” to fund enforcement.” (IAPP / Lothar Determann)

“[T]he Attorney General, through its support of SB 561, is seeking to remove this provision, known as a “30-day cure,” arguing that it would be able to secure more civil penalties and thus increase enforcement. Specifically, the Attorney General has said it needs to raise $57.5 million in civil penalties to cover the cost of CCPA enforcement.”  (ITIF / Daniel Castro and Alan McQuinn)

9. CCPA is inconsistent with existing privacy laws

“California has led the United States and often the world in codifying privacy protections, enacting the first laws requiring notification of data security breaches (2002) and website privacy policies (2004). In the operative section of the new law, however, the California Consumer Privacy Act’s drafters did not address any overlap or inconsistencies between the new law and any of California’s existing privacy laws, perhaps due to the rushed legislative process, perhaps due to limitations on the ability to negotiate with the proponents of the Initiative. Instead, the new Cal. Civ. Code §1798.175 prescribes that in case of any conflicts with California laws, the law that affords the greatest privacy protections shall control.” (IAPP / Lothar Determann)

10. CCPA will need to be amended, creating uncertainty for businesses

As of now, a dozen bills amending CCPA have passed the California Assembly and continue to wind their way through the legislative process. California lawmakers have until September 13th to make any final changes to the law before it goes into effect. In the meantime, businesses have to begin compliance preparations under a cloud of uncertainty about what the law says today — or what it might say in the future.

More than a century of bad news

Bill Gates recently tweeted the image below, commenting that he is “always amazed by the disconnect between what we see in the news and the reality of the world around us.”

https://pbs.twimg.com/media/D8zWfENUYAAvK5I.png

Of course, this chart and Gates’s observation are nothing new – there has long been an accuracy gap between what the news covers (and therefore what Americans believe is important) and what is actually important. As discussed in one academic article on the subject:

The line between journalism and entertainment is dissolving even within traditional news formats. [One] NBC executive [] decreed that every news story should “display the attributes of fiction, of drama. It should have structure and conflict, problem and denouement, rising action and falling action, a beginning, a middle and an end.” … This has happened both in broadcast and print journalism. … Roger Ailes … explains this phenomenon with an Orchestra Pit Theory: “If you have two guys on a stage and one guy says, ‘I have a solution to the Middle East problem,’ and the other guy falls in the orchestra pit, who do you think is going to be on the evening news?”

Matters of policy get increasingly short shrift. In 1968, the network newscasts generally showed presidential candidates speaking, and on the average a candidate was shown speaking uninterrupted for forty-two seconds. Over the next twenty years, these sound bites had shrunk to an average of less than ten seconds. This phenomenon is by no means unique to broadcast journalism; there has been a parallel decline in substance in print journalism as well. …

The fusing of news and entertainment is not accidental. “I make no bones about it—we have to be entertaining because we compete with entertainment options as well as other news stories,” says the general manager of a Florida TV station that is famous, or infamous, for boosting the ratings of local newscasts through a relentless focus on stories involving crime and calamity, all of which are presented in a hyperdramatic tone (the so-called “If It Bleeds, It Leads” format). There was a time when news programs were content to compete with other news programs, and networks did not expect news divisions to be profit centers, but those days are over.

That excerpt feels like it could have been written today. It was not: it was published in 1996. The “if it bleeds, it leads” trope is often attributed to a 1989 New York magazine article – and once introduced into the popular vernacular, it grew quickly in popularity.

Of course, the idea that the media sensationalizes its reporting is not a novel observation. “If it bleeds, it leads” is just the late-20th-century term for what had been “sex sells” – and “yellow journalism” before that. And, of course, “if it bleeds” is the precursor to our more modern equivalent: “clickbait.”

The debate about how to save the press from Google and Facebook … is the wrong debate to have

We are in the midst of a debate about how to save the press in the digital age. The House Judiciary Committee recently held a hearing on the relationship between online platforms and the press; and the Australian Competition & Consumer Commission recently released a preliminary report on the same topic.

In general, these discussions focus on concerns that advertising dollars have shifted from analog-era media in the 20th century to digital platforms in the 21st century – leaving the traditional media underfunded and unable to do its job. More specifically, competition authorities are being urged (by the press) to look at this through the lens of antitrust, arguing that Google and Facebook are the dominant two digital advertising platforms and have used their market power to harm the traditional media.

I have previously explained that this is bunk, as has John Yun in his critique of current proposals. I won’t rehash those arguments here, beyond noting that traditional media’s revenues have been falling since the advent of the Internet – not since the advent of Google or Facebook. The problem the traditional media face is not that monopoly platforms are engaging in conduct that harms them – it is that the Internet is a better platform for both advertising and information distribution, so both advertisers and information consumers have migrated to digital platforms (and away from traditional news media).

This is not to say that digital platforms are capable of, or well-suited to, the production and distribution of the high-quality news and information content that we have historically relied on the traditional media to produce. Yet, contemporary discussions about whether traditional news media can survive in an era where ad revenue accrues primarily to large digital platforms have been surprisingly quiet on the question of the quality of content produced by the traditional media.

Actually, that’s not quite true. First, as indicated by the chart tweeted by Gates, digital platforms may be providing consumers with information that is more relevant to them.

Second, and more important, media advocates argue that without the ad revenue that has been diverted (by advertisers, not by digital platforms) to firms like Google and Facebook they lack the resources to produce high quality content. But that assumes that they would produce high quality content if they had access to those resources. As Gates’s chart – and the last century of news production – demonstrates, that is an ill-supported claim. History suggests that, left to its own devices and not constrained for resources by competition from digital platforms, the traditional media produces significant amounts of clickbait.

It’s all about the Benjamins

Among critics of the digital platforms, there is a line of argument that the advertising-based business model is the original sin of the digital economy. The ad-based business model corrupts digital platforms and turns them against their users – the user, that is, becomes the product in the surveillance capitalism state. We would all be much better off, the argument goes, if the platforms operated under subscription- or micropayment-based business models.

It is noteworthy that press advocates eschew this line of argument. Their beef with the platforms is that they have “stolen” the ad revenue that rightfully belongs to the traditional media. The ad revenue, of course, that is the driver behind clickbait, “if it bleeds it leads,” “sex sells,” and yellow journalism. The original sin of advertising-based business models is not original to digital platforms – theirs is just an evolution of the model perfected by the traditional media.

I am a believer in the importance of the press – and, for that matter, in the efficacy of ad-based business models. But more than a hundred years of experience makes clear that mixing the two into the hybrid bastard that is infotainment should prompt concern and discussion about the business model of the traditional press (and, indeed, for most of the past 30 years or so it has done so).

When it comes to “saving the press,” the discussion ought not be about how to restore traditional media to its pre-Facebook glory days of the early aughts, or even its pre-modern-Internet golden age of the late 1980s. By that point, the media was already well along the slippery slope to where it is today. We desperately need a strong, competitive market for news and information. We should use the current crisis in that market to discuss solutions for the future, not ways to preserve the past.

In an amicus brief filed last Friday, a diverse group of antitrust scholars joined the Washington Legal Foundation in urging the U.S. Court of Appeals for the Second Circuit to vacate the Federal Trade Commission’s misguided 1-800 Contacts decision. Reasoning that 1-800’s settlements of trademark disputes were “inherently suspect,” the FTC condemned the settlements under a cursory “quick look” analysis. In so doing, it improperly expanded the category of inherently suspect behavior and ignored an obvious procompetitive justification for the challenged settlements.  If allowed to stand, the Commission’s decision will impair intellectual property protections that foster innovation.

A number of 1-800’s rivals purchased online ad placements that would appear when customers searched for “1-800 Contacts.” 1-800 sued those rivals for trademark infringement, and the lawsuits settled. As part of each settlement, 1-800 and its rival agreed not to bid on each other’s trademarked terms in search-based keyword advertising. (For example, EZ Contacts could not bid on a placement tied to a search for 1-800 Contacts, and vice-versa). Each party also agreed to employ “negative keywords” to ensure that its ads would not appear in response to a consumer’s online search for the other party’s trademarks. (For example, in bidding on keywords, 1-800 would have to specify that its ad must not appear in response to a search for EZ Contacts, and vice-versa). Notably, the settlement agreements didn’t restrict the parties’ advertisements through other media such as TV, radio, print, or other forms of online advertising. Nor did they restrict paid search advertising in response to any search terms other than the parties’ trademarks.
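For readers unfamiliar with the mechanics of search-keyword advertising, here is a minimal sketch of how the settlements’ restrictions would operate. The advertiser names, keyword sets, and one-line eligibility check are all illustrative assumptions; real ad platforms use far more elaborate match types and auction logic:

```python
# Illustrative sketch of the settlements' keyword restrictions.
# All names and the simplified matching logic are hypothetical;
# real search-ad platforms use elaborate match types (broad,
# phrase, exact) and auctions, not a simple set lookup.

def ad_eligible(bid_keywords, negative_keywords, query):
    """An ad may appear only if the query matches a bid keyword
    and does not match any negative keyword."""
    q = query.lower()
    return q in bid_keywords and q not in negative_keywords

# Under the settlement, each party stops bidding on the other's
# trademark and adds it as a negative keyword.
one800_bids = {"contact lenses", "cheap contacts"}  # no rival trademarks
one800_negatives = {"ez contacts"}                  # required negative keyword

print(ad_eligible(one800_bids, one800_negatives, "cheap contacts"))  # True
print(ad_eligible(one800_bids, one800_negatives, "EZ Contacts"))     # False
```

As the sketch makes plain, only queries for the counterparty’s trademark are affected; every other keyword, and every other advertising channel, remains open.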

The FTC concluded that these settlement agreements violated the antitrust laws as unreasonable restraints of trade. Although the agreements were not unreasonable per se, as naked price-fixing is, the Commission didn’t engage in the normally applicable rule of reason analysis to determine whether the settlements passed muster. Instead, the Commission condemned the settlements under the truncated analysis that applies when, in the words of the Supreme Court, “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” The Commission decided that no more than a quick look was required because the settlements “restrict the ability of lower cost online sellers to show their ads to consumers.”

That was a mistake. First, the restraints in 1-800’s settlements are far less extensive than other restraints that the Supreme Court has said may not be condemned under a cursory quick look analysis. In California Dental, for example, the Supreme Court reversed a Ninth Circuit decision that employed the quick look analysis to condemn a de facto ban on all price and “comfort” advertising by members of a dental association. In light of the possibility that the ban could reduce misleading ads, enhance customer trust, and thereby stimulate demand, the Court held that the restraint must be assessed under the more probing rule of reason. A narrow limit on the placement of search ads is far less restrictive than the all-out ban for which the California Dental Court prescribed full-on rule of reason review.

1-800’s settlements are also less likely to be anticompetitive than are other settlements that the Supreme Court has said must be evaluated under the rule of reason. The Court’s Actavis decision rejected quick look and mandated full rule of reason analysis for reverse payment settlements of pharmaceutical patent litigation. In a reverse payment settlement, the patent holder pays an alleged infringer to stay out of the market for some length of time. 1-800’s settlements, by contrast, did not exclude its rivals from the market, place any restrictions on the content of their advertising, or restrict the placement of their ads except on webpages responding to searches for 1-800’s own trademarks. If the restraints in California Dental and Actavis required rule of reason analysis, then those in 1-800’s settlements surely must as well.

In addition to disregarding Supreme Court precedents that limit when mere quick look is appropriate, the FTC gave short shrift to a key procompetitive benefit of the restrictions in 1-800’s settlements. 1-800 spent millions of dollars convincing people that they could save money by ordering prescribed contact lenses from a third party rather than buying them from prescribing optometrists. It essentially built the online contact lens market in which its rivals now compete. In the process, it created a strong trademark, which undoubtedly boosts its own sales. (Trademarks point buyers to a particular seller and enhance consumer confidence in the seller’s offering, since consumers know that branded sellers will not want to tarnish their brands with shoddy products or service.)

When a rival buys ad space tied to a search for 1-800 Contacts, that rival is taking a free ride on 1-800’s investments in its own brand and in the online contact lens market itself. A rival that has advertised less extensively than 1-800—primarily because 1-800 has taken the lead in convincing consumers to buy their contact lenses online—will incur lower marketing costs than 1-800 and may therefore be able to underprice it.  1-800 may thus find that it loses sales to rivals who are not more efficient than it is but have lower costs because they have relied on 1-800’s own efforts.

If market pioneers like 1-800 cannot stop this sort of free-riding, they will have less incentive to make the investments that create new markets and develop strong trade names. The restrictions in the 1-800 settlements were simply an effort to prevent inefficient free-riding while otherwise preserving the parties’ freedom to advertise. They were a narrowly tailored solution to a problem that hurt 1-800 and reduced incentives for future investments in market-developing activities that inure to the benefit of consumers.

Rule of reason analysis would have allowed the FTC to assess the full market effects of 1-800’s settlements. The Commission’s truncated assessment, which was inconsistent with Supreme Court decisions on when a quick look will suffice, condemned conduct that was likely procompetitive. The Second Circuit should vacate the FTC’s order.

The full amicus brief, primarily drafted by WLF’s Corbin Barthold and joined by Richard Epstein, Keith Hylton, Geoff Manne, Hal Singer, and me, is here.

This guest post is by Corbin K. Barthold, Litigation Counsel at Washington Legal Foundation.

Complexity need not follow size. A star is huge but mostly homogenous. “Its core is so hot,” explains Martin Rees, “that no chemicals can exist (complex molecules get torn apart); it is basically an amorphous gas of atomic nuclei and electrons.”

Nor does complexity always arise from remoteness of space or time. Celestial gyrations can be readily grasped. Thales of Miletus probably predicted a solar eclipse. Newton certainly could have done so. And we’re confident that in 4.5 billion years the Andromeda galaxy will collide with our own.

If the simple can be seen in the large and the distant, equally can the complex be found in the small and the immediate. A double pendulum is chaotic. Likewise the local weather, the fluctuations of a wildlife population, or the dispersion of the milk you pour into your coffee.

Our economy is not like a planetary orbit. It’s more like the weather or the milk. No one knows which companies will become dominant, which products will become popular, or which industries will become defunct. No one can see far ahead. Investing is inherently risky because the future of the economy, or even a single segment of it, is intractably uncertain. Do not hand your savings to any expert who says otherwise. Experts, in fact, often see the least of all.

But if a broker with a “sure thing” stock is a mountebank, what does that make an antitrust scholar with an “optimum structure” for a market? 

Not a prophet.

There is so much that we don’t know. Consider, for example, the notion that market concentration is a good measure of market competitiveness. The idea seems intuitive enough, and in many corners it remains an article of faith.

But the markets where this assumption is most plausible—hospital care and air travel come to mind—are heavily shaped by that grand monopolist we call government. Only a large institution can cope with the regulatory burden placed on the healthcare industry. As Tyler Cowen writes, “We get the level of hospital concentration that we have in essence chosen through politics and the law.”

As for air travel: the government promotes concentration by barring foreign airlines from the domestic market. In any case, the state of air travel does not support a straightforward conclusion that concentration equals power. The price of flying has fallen almost continuously since passage of the Airline Deregulation Act in 1978. The major airlines are disciplined by fringe carriers such as JetBlue and Southwest.

It is by no means clear that, aside from cases of government-imposed concentration, a consolidated market is something to fear. Technology lowers costs, lower costs enable scale, and scale tends to promote efficiency. Scale can arise naturally, therefore, from the process of creating better and cheaper products.

Say you’re a nineteenth-century cow farmer, and the railroad reaches you. Your shipping costs go down, and you start to sell to a wider market. As your farm grows, you start to spread your capital expenses over more sales. Your prices drop. Then refrigerated rail cars come along, you start slaughtering your cows on site, and your shipping costs go down again. Your prices drop further. Farms that fail to keep pace with your cost-cutting go bust. The cycle continues until beef is cheap and yours is one of the few cow farms in the area. The market improves as it consolidates.

As the decades pass, this story repeats itself on successively larger stages. The relentless march of technology has enabled the best companies to compete for regional, then national, and now global market share. We should not be surprised to see ever fewer firms offering ever better products and services.

Bear in mind, moreover, that it’s rarely the same company driving each leap forward. As Geoffrey Manne and Alec Stapp recently noted in this space, markets are not linear. Just after you adopt the next big advance in the logistics of beef production, drone delivery will disrupt your delivery network, cultured meat will displace your product, or virtual-reality flavoring will destroy your industry. Or—most likely of all—you’ll be ambushed by something you can’t imagine.

Does market concentration inhibit innovation? It’s possible. “To this day,” write Joshua Wright and Judge Douglas Ginsburg, “the complex relationship between static product market competition and the incentive to innovate is not well understood.” 

There’s that word again: complex. When will thumping company A in an antitrust lawsuit increase the net amount of innovation coming from companies A, B, C, and D? Antitrust officials have no clue. They’re as benighted as anyone. These are the people who will squash Blockbuster’s bid to purchase a rival video-rental shop less than two years before Netflix launches a streaming service.

And it’s not as if our most innovative companies are using market concentration as an excuse to relax. If its only concern were maintaining Google’s grip on the market for internet-search advertising, Alphabet would not have spent $16 billion on research and development last year. It spent that much because its long-term survival depends on building the next big market—the one that does not exist yet.

No expert can reliably make the predictions necessary to say when or how a market should look different. And if we empowered some experts to make such predictions anyway, no other experts would be any good at predicting what the empowered experts would predict. Experts trying to give us “well structured” markets will instead give us a costly, politicized, and stochastic antitrust enforcement process. 

Here’s a modest proposal. Instead of using the antitrust laws to address the curse of bigness, let’s create the Office of the Double Pendulum. We can place the whole section in a single room at the Justice Department. 

All we’ll need is some ping-pong balls, a double pendulum, and a monkey. On each ball will be the name of a major corporation. Once a quarter—or a month; reasonable minds can differ—a ball will be drawn, and the monkey prodded into throwing the pendulum. An even number of twirls saves the company on the ball. An odd number dooms it to being broken up.

This system will punish success just as haphazardly as anything our brightest neo-Brandeisian scholars can devise, while avoiding the ruinously expensive lobbying, rent-seeking, and litigation that arise when scholars succeed in replacing the rule of law with the rule of experts.

All hail the chaos monkey. Unutterably complex. Ineffably simple.

Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices.  The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation.  As Committee Chairman Lindsey Graham explained:

One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.

Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.  

1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies.  Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or abused the REMS (Risk Evaluation Mitigation Strategy) program to refuse samples to generics under the auspices of REMS safety requirements.  The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples.  It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol.  Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition.  Increased generic competition should, in turn, reduce drug prices.

2. Restrict abuses of FDA Citizen Petitions.  The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval.  However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry.  Although FDA has indicated that citizen petitions rarely delay the approval of generic drugs, there have been a few drug makers, such as Shire ViroPharma, that have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses.  The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry.  However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.

3. Curtail Anticompetitive Pay-for-Delay Settlements.  The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. Like all litigation, many of these patent challenges result in settlements instead of trials.  The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company agreeing to forestall the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead.  As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.

However, the Act’s rigid presumption that an exchange of anything of value is presumptively anticompetitive may also prevent legitimate settlements that ultimately benefit consumers.  Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent and possibly earlier than if there was prolonged litigation between the generic and brand company.  A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place.  Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value.  Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.

4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers.  I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation.  Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process.  And because IPR proceedings do not have a standing requirement, the process has been exploited  by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices. 

The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.  By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers.  This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.

5. Curb illegal product hopping and patent thickets.  Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.”  At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent term so that they get a new patent on the slightly-tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug.  Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug.  The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.

However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the sale of any improved version of a drug during a window extending to a year after the launch of the first generic competitor.  Presently, to acquire a patent and FDA approval, the improved version of the drug must be sufficiently different from and more innovative than the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court.  Similarly, the Act defines as an anticompetitive patent thicket any new patent filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court.  As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation built on previous inventions.  Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.

Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices.  However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation because a significant body of research predicts that patients’ health outcomes will suffer as a result.

It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper hewed to the precedent established by Ohio v. Amex it would start with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as it establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, the former (requiring plaintiffs to address effects on both sides of the market) imposes an additional burden on two-sided market plaintiffs, while the latter (expanded standing) lessens that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality, such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases unwinnable against two-sided market platforms thanks to Amex’s requirement that a prima facie showing of anticompetitive effect requires assessment of the effects on both sides of a two-sided market and proof of a net anticompetitive outcome. The critics should have been chastened by a proper decision in Apple v. Pepper. As it is, the holding (although not the reasoning) still may serve to undermine their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex which (assuming the App Store is construed as some species of a two-sided “transaction” market) directs that Respondent has the burden of considering harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the just demands for consistency, but by the error-cost framework of the overall analysis, which runs throughout the stages of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, either the users on the different sides of a two-sided market suffer direct injury without pass-through under a proper definition of the relevant market, or else their interrelatedness is so strong that, complicated as it may be, the needs of substantive accuracy trump the administrative costs in sorting out the incidence of the costs, and courts cannot avoid them.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity in damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders/implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the product or service of the platform in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.
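To put rough numbers on the quoted principle, consider a minimal sketch in which a platform raises its merchant-side fee but increases cardholder rewards by the same amount. All figures are hypothetical and not drawn from the Amex record:

```python
# Hypothetical two-sided pricing: a one-sided fee increase that is
# fully offset by rewards on the other side leaves the overall
# (net) price of the platform's services unchanged.

merchant_fee_before, merchant_fee_after = 1.00, 1.50          # per transaction
cardholder_reward_before, cardholder_reward_after = 0.20, 0.70

net_price_before = merchant_fee_before - cardholder_reward_before
net_price_after = merchant_fee_after - cardholder_reward_after

print(round(net_price_before, 2))  # 0.8
print(round(net_price_after, 2))   # 0.8 -> no increase in the two-sided price
```

On these (assumed) numbers, the merchant-side price rose 50 percent, yet the overall cost of the platform’s services did not rise at all, which is precisely why Amex requires looking at both sides.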

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit Judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitve charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge it was really borne entirely by concert-goers. As a result, apportionment — the complexity of which gives rise to the standard in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concertgoers in an indirect relationship with Ticketmaster is one in which Ticketmaster and concert venues divvied up the alleged monopoly spoils, not one in which the venues absorb their share of the monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple on one hand, and the developers and consumers on the other, as some sort of complicated version of the classical manufacturing and distribution chains. But, more likely, it is advisable to actually inquire into the relevant factual differences that underpin Apple’s business model and adapt how courts consider market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff can choose to allege that a defendant operates either as a two-sided market or in a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, it need only show that injury occurred to some subset of platform users with whom it is inextricably interrelated. The plaintiff would not need to demonstrate injury to itself, nor allege net harm, nor show directness.

In response, a defendant can contest standing by challenging the interrelatedness of the plaintiff and the group of platform users with whom the plaintiff claims interrelatedness. If the defendant does not challenge the allegation that it operates a two-sided market, it could not challenge standing by showing indirectness, that plaintiff had not alleged personal injury, or that plaintiff hasn’t alleged a net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms might require more evidence to establish anticompetitive harm, but that business model also means that firms should open themselves up to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult to manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

In 2014, Benedict Evans, a venture capitalist at Andreessen Horowitz, wrote “Why Amazon Has No Profits (And Why It Works),” a blog post in which he tried to explain Amazon’s business model. He began with a chart of Amazon’s revenue and net income that has now become (in)famous:

Source: Benedict Evans

A question inevitably followed in antitrust circles: How can a company that makes so little profit on so much revenue be worth so much money? It must be predatory pricing!

Predatory pricing is a rather rare anticompetitive practice because the “predator” runs the risk of bankrupting itself in the process of trying to drive rivals out of business with below-cost pricing. Furthermore, even if a predator successfully clears the field of competition, in developed economies with deep capital markets it is extremely unlikely to keep new entrants out.

Nonetheless, in those rare cases where plaintiffs can demonstrate that a firm actually has a viable scheme to drive competitors from the market with prices that are “too low” and has the ability to recoup its losses once it has cleared the market of those competitors, plaintiffs (including the DOJ) can prevail in court.

In other words, whoa if true.
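To see why the bet is so unattractive, it helps to put stylized numbers on it. Everything in the sketch below is hypothetical; the point is only that predation losses are immediate and certain, while recoupment is delayed, discounted, and contingent on keeping entrants out:

```python
# Hypothetical predation arithmetic. Losses from below-cost pricing
# come first; recoupment arrives later, must be discounted, and only
# happens if no new entrant spoils the hoped-for monopoly.

loss_years, annual_loss = 3, 100.0   # below-cost period
gain_years, annual_gain = 5, 80.0    # hoped-for monopoly profits
r = 0.10                             # discount rate
p_blocked = 0.5                      # chance entry is actually deterred

pv_losses = sum(annual_loss / (1 + r) ** t
                for t in range(1, loss_years + 1))
pv_gains = p_blocked * sum(annual_gain / (1 + r) ** t
                           for t in range(loss_years + 1,
                                          loss_years + gain_years + 1))

print(round(pv_losses, 1))  # 248.7
print(round(pv_gains, 1))   # 113.9 -> on these numbers, predation destroys value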

Khan’s Predatory Pricing Accusation

In 2017, Lina Khan, then a law student at Yale, published “Amazon’s Antitrust Paradox” as a note in the Yale Law Journal and used Evans’ chart as supporting evidence that Amazon was guilty of predatory pricing. In the abstract, she says, “Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead.”

But if Amazon is selling below-cost, where does the money come from to finance those losses?

In her article, Khan hinted at two potential explanations: (1) Amazon is using profits from the cloud computing division (AWS) to cross-subsidize losses in the retail division or (2) Amazon is using money from investors to subsidize short-term losses:

Recently, Amazon has started reporting consistent profits, largely due to the success of Amazon Web Services, its cloud computing business. Its North America retail business runs on much thinner margins, and its international retail business still runs at a loss. But for the vast majority of its twenty years in business, losses—not profits—were the norm. Through 2013, Amazon had generated a positive net income in just over half of its financial reporting quarters. Even in quarters in which it did enter the black, its margins were razor-thin, despite astounding growth.

Just as striking as Amazon’s lack of interest in generating profit has been investors’ willingness to back the company. With the exception of a few quarters in 2014, Amazon’s shareholders have poured money in despite the company’s penchant for losses.

Revising predatory pricing doctrine to reflect the economics of platform markets, where firms can sink money for years given unlimited investor backing, would require abandoning the recoupment requirement in cases of below-cost pricing by dominant platforms.

Below-Cost Pricing Not Subsidized by Investors

But neither explanation withstands scrutiny. First, the money is not from investors. Amazon has not raised equity financing since 2003. Nor is it debt financing: The company’s net debt position has been near-zero or negative for its entire history (excluding the Whole Foods acquisition):

Source: Benedict Evans

Amazon does not require new outside financing because it has had positive operating cash flow since 2002.

Notably for a piece of analysis attempting to explain Amazon’s business practices, the text of Khan’s 93-page law review article does not include the word “cash” even once.

Below-Cost Pricing Not Cross-Subsidized by AWS

Source: The Information

As Priya Anand observed in a recent piece for The Information, since Amazon started breaking out AWS in its financials, operating income for the North America retail business has been significantly positive:

But [Khan] underplays its retail profits in the U.S., where the antitrust debate is focused. As the above chart shows, its North America operation has been profitable for years, and its operating income has been on the rise in recent quarters. While its North America retail operation has thinner margins than AWS, it still generated $2.84 billion in operating income last year, which isn’t exactly a rounding error compared to its $4.33 billion in AWS operating income.

Below-Cost Pricing in Retail Also Known as “Loss Leader” Pricing

Okay, so maybe Amazon isn’t using below-cost pricing in aggregate in its retail division. But it still could be using profits from some retail products to cross-subsidize below-cost pricing for other retail products (e.g., diapers), with the intention of driving competitors out of business to capture monopoly profits. This is essentially what Khan claims happened in the Diapers.com (Quidsi) case. But in the retail industry, diapers are explicitly cited as a loss leader that helps retailers develop a customer relationship with mothers in the hopes of selling them a higher volume of products over time. This is exactly what the founders of Diapers.com told Inc Magazine in a 2012 interview (emphasis added):

We saw brick-and-mortar stores, the Wal-Marts and Targets of the world, using these products to build relationships with mom and the end consumer, bringing them into the store and selling them everything else. So we thought that was an interesting model and maybe we could replicate that online. And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

An anticompetitive scheme could be built into such bundling, but in many if not the overwhelming majority of these cases, consumers are the beneficiaries of lower prices and expanded output produced by these arrangements. It’s hard to definitively say whether any given firm that discounts its products is actually pricing below average variable cost (“AVC”) without far more granular accounting ledgers than are typically  maintained. This is part of the reason why these cases can be so hard to prove.

A successful predatory pricing strategy also requires blocking market entry when the predator eventually raises prices. But the Diapers.com case is an explicit example of repeated entry that would defeat recoupment. In an article for the American Enterprise Institute, Jeffrey Eisenach shares the rest of the story following Amazon’s acquisition of Diapers.com:

Amazon’s conduct did not result in a diaper-retailing monopoly. Far from it. According to Khan, Amazon had about 43 percent of online sales in 2016 — compared with Walmart at 23 percent and Target with 18 percent — and since many people still buy diapers at the grocery store, real shares are far lower.

In the end, Quidsi proved to be a bad investment for Amazon: After spending $545 million to buy the firm and operating it as a stand-alone business for more than six years, it announced in April 2017 it was shutting down all of Quidsi’s operations, Diapers.com included. In the meantime, Quidsi’s founders poured the proceeds of the Amazon sale into a new online retailer — Jet.com — which was purchased by Walmart in 2016 for $3.3 billion. Jet.com cofounder Marc Lore now runs Walmart’s e-commerce operations and has said publicly that his goal is to surpass Amazon as the top online retailer.

Sussman’s Predatory Pricing Accusation

Earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. The article, which was written up by David Dayen for In These Times, presents a novel two-part argument for how Amazon might be profitably engaging in predatory pricing without raising prices:

1. Amazon’s “True” Cash Flow Is Negative

Sussman argues that the company has been inflating its free cash flow numbers by excluding “capital leases.” According to Sussman, “If all of those expenses as detailed in its statements are accounted for, Amazon experienced a negative cash outflow of $1.461 billion in 2017.” Even though it’s not dispositive of predatory pricing on its own, Sussman believes that a negative free cash flow implies the company has been selling below-cost to gain market share.
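Mechanically, the adjustment Sussman proposes looks something like the sketch below. The dollar figures are invented for illustration; only the direction of the adjustment, subtracting assets acquired under capital leases from conventionally calculated free cash flow, tracks his argument:

```python
# Minimal sketch of the free-cash-flow adjustment Sussman proposes.
# All figures are hypothetical (in $ billions); the point is that
# counting capital-lease additions can flip reported FCF negative.

operating_cash_flow = 18.4
capital_expenditures = 10.1
capital_lease_additions = 9.8   # assets acquired under capital leases

fcf_conventional = operating_cash_flow - capital_expenditures
fcf_adjusted = fcf_conventional - capital_lease_additions

print(round(fcf_conventional, 1))  # 8.3  -> comfortably positive
print(round(fcf_adjusted, 1))      # -1.5 -> negative once leases count
```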

2. Amazon Recoups Losses By Lowering AVC, Not By Raising Prices

Instead of raising prices to recoup losses from below-cost pricing, Sussman argues that Amazon flies under the antitrust radar by keeping consumer prices low and progressively decreasing AVC, ostensibly by using its monopsony power to offload costs onto suppliers and partners (although this point is not fully explored in his piece).

But Sussman’s argument contains errors in both its legal reasoning and its underlying empirical assumptions.

Below-cost pricing?

While there are many different ways to calculate the “cost” of a product or service, “below-cost pricing” generally means a price below marginal cost or AVC. Courts typically rely on AVC in predatory pricing cases. And as Herbert Hovenkamp has noted, proving that a price falls below AVC is exceedingly difficult, particularly when dealing with firms in dynamic markets that sell a number of differentiated but complementary goods or services. Amazon, the focus of Sussman’s article, is a useful example here.
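
To fix ideas, here is a minimal sketch of the below-cost screen, under the assumption that AVC is the relevant cost measure. The figures are hypothetical, and, as Hovenkamp’s point suggests, real litigation turns on far messier accounting than this toy comparison implies.

```python
# A minimal sketch of the below-cost screen: compare price to average
# variable cost. All figures are hypothetical.

def average_variable_cost(total_variable_cost, units):
    return total_variable_cost / units

def flunks_below_cost_screen(price, avc):
    """Prices at or above AVC are presumptively lawful."""
    return price < avc

avc = average_variable_cost(total_variable_cost=70_000, units=10_000)  # $7.00
print(flunks_below_cost_screen(price=6.50, avc=avc))  # True: below AVC
print(flunks_below_cost_screen(price=7.50, avc=avc))  # False: presumptively lawful
```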

When products are complements, or can otherwise be bundled, firms may also be able to offer discounts that would be unprofitable on single items. In business this is known as the “razor and blades” model (i.e., sell the razor handle below cost once and recoup the losses on future sales of blades), although it’s not clear this model has ever actually been practiced. Printer manufacturers are an oft-cited example: printers are often sold below AVC in the expectation that the profits will be realized on the ongoing sale of ink. Amazon’s Kindle functions similarly: Amazon sells the Kindle at around its AVC, ostensibly on the belief that it will realize a profit on selling e-books in the Kindle store.

Yet, even ignoring this common and broadly inoffensive practice, Sussman’s argument is odd. In essence, he claims that Amazon is hiding some of its costs in the form of capital leases in order to conceal its below-AVC pricing, while it simultaneously works to lower its real AVC below the prices it charges consumers. At the end of this process, once its real AVC is sufficiently below consumer prices, it will (so the argument goes) be in the position of a monopolist reaping monopoly profits.

The problem with this argument should be immediately apparent. For the moment, let’s set aside the classic recoupment problem, in which new entrants are drawn into the market to compete away those monopoly profits at the newly achievable AVC. The deeper problem with Sussman’s logic is that it implies that if Amazon sharply lowers its AVC (that is, if it makes production massively more efficient) and then does not drop prices, it is a “predator.” But by pricing below its AVC in the first place, Amazon in essence extended consumers a loan: they enjoyed what Sussman believes are radically low prices while Amazon worked to make those prices actually possible by creating production efficiencies. It seems rather strange to punish a firm for loaning consumers a large measure of wealth. It’s doubly odd once the recoupment problem is factored back in: as soon as other firms figure out that a lower AVC is possible, they will enter the market and bid away any monopoly profits from Amazon.

Sussman’s Technical Analysis Is Flawed

While there are issues with Sussman’s general theory of harm, there are also some specific problems with his technical analysis of Amazon’s financial statements.

Capital Leases Are a Fixed Cost

First, capital leases should not be included in cost calculations for a predatory pricing case because they are fixed, not variable, costs. Again, “below-cost” claims in predatory pricing cases generally use AVC (and sometimes marginal cost) as the relevant cost measure.

Capital Leases Are Mostly for Server Farms

Second, the usual story is that Amazon uses its wildly profitable Amazon Web Services (AWS) division to subsidize predatory pricing in its retail division. But Amazon’s “capital leases” (Sussman’s hidden costs in the free cash flow calculations) are mostly AWS capital expenditures, i.e., server farms.

According to the most recent annual report: “Property and equipment acquired under capital leases was $5.7 billion, $9.6 billion, and $10.6 billion in 2016, 2017, and 2018, with the increase reflecting investments in support of continued business growth primarily due to investments in technology infrastructure for AWS, which investments we expect to continue over time.”

In other words, any adjustments to the free cash flow numbers for capital leases would make Amazon Web Services appear less profitable, and would not have a large effect on the accounting for Amazon’s retail operation (the only division thus far accused of predatory pricing).

Look at Operating Cash Flow Instead of Free Cash Flow

Again, while cash flow measures cannot prove or disprove the existence of predatory pricing, a positive cash flow measure should make us more skeptical of such accusations. In the retail sector, operating cash flow is the appropriate metric to consider. As shown above, Amazon has had positive (and increasing) operating cash flow since 2002.

Your Theory of Harm Is Also Known as “Investment”

Third, in general, Sussman’s novel predatory pricing theory is indistinguishable from pro-competitive behavior in an industry with high fixed costs. From the abstract (emphasis added):

[N]egative cash flow firm[s] … can achieve greater market share through predatory pricing strategies that involve long-term below average variable cost prices … By charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies, these firms are able to rationalize their predatory pricing practices to investors and shareholders.

“Charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies” is literally what it means to invest in capex and R&D.
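
A toy net-present-value calculation (with made-up numbers) illustrates the point: a firm can price below its current AVC, never raise its price, and still earn a positive return once anticipated efficiencies materialize. That is investment, not predation.

```python
# Sketch of why "prices reflecting future lower costs" is ordinary
# investment: the price never rises, yet the strategy has positive net
# present value once anticipated efficiencies arrive. Numbers are
# hypothetical.

def npv(cash_flows, rate=0.10):
    """Net present value of a stream of annual cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

price = 8.00                               # held constant throughout
unit_costs = [10.0, 9.0, 7.0, 6.0, 5.0]    # AVC falls as efficiencies kick in
units = 1_000_000
cash_flows = [(price - c) * units for c in unit_costs]

print(npv(cash_flows) > 0)  # True: profitable with no price increase at all
```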

Sussman’s paper presents a clever attempt to work around the doctrinal limitations on predatory pricing. But, if courts seriously adopt an approach like this, they will be putting in place a legal apparatus that quite explicitly focuses on discouraging investment. This is one of the last things we should want antitrust law to be doing.

The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?

It seems almost silly to ask if the last firm in a dying industry is a monopolist. But it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself rather than on the alternative: what if the firm, and therefore the industry, did not exist at all?

A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”

The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by Π, and deadweight loss by H.

The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market at all, the deadweight loss is equal to CS + H.
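
To put rough algebra behind that claim, consider a stylized linear-demand example; the functional forms and notation below are my own assumptions, not the authors’.

```latex
% Inverse demand P(Q) = a - bQ, constant marginal cost c < a, entry cost F.
% Monopoly output and price, and the competitive quantity:
\[
  Q_m = \frac{a - c}{2b}, \qquad P_m = \frac{a + c}{2}, \qquad Q_c = \frac{a - c}{b}
\]
% Surpluses under monopoly pricing:
\[
  CS = \frac{(a - c)^2}{8b}, \qquad
  \Pi = \frac{(a - c)^2}{4b} - F, \qquad
  H = \frac{(a - c)^2}{8b}
\]
% If F is just large enough that \Pi \le 0 and the firm stays out, society
% forgoes CS + H = (a - c)^2 / (4b): in the linear case, exactly twice the
% classic monopoly deadweight triangle H.
```

In other words, even in this simplest setting, losing the market entirely costs society at least double the familiar monopoly triangle that competition authorities usually worry about.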

Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.

In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.

Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.

Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2005. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.

But, just as Antioco can be chided for undervaluing Netflix, so can the FTC. The regulators were so focused on the Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing to bear. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have staved off the bankruptcy of the three largest video rental retailers.

Nor is it known whether, or to what extent, consumers were made better or worse off by the exit of Blockbuster, Hollywood, and Movie Gallery.

Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.

The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

What’s more, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the underlying network effects story remains dominant in many quarters. In that light, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they draw normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any attention to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.
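
A stylized two-user coordination game, with payoffs assumed purely for illustration, makes these externalities concrete.

```python
# Two users choose between an OLD standard and a superior NEW one.
# Payoffs are assumed purely for illustration.
payoffs = {  # (A's choice, B's choice) -> (A's payoff, B's payoff)
    ("old", "old"): (2, 2),
    ("old", "new"): (1, 0),  # lone switcher gets 0, strands the stayer at 1
    ("new", "old"): (0, 1),
    ("new", "new"): (3, 3),  # jointly best outcome
}

# Unilateral switching is irrational: A's payoff falls from 2 to 0.
print(payoffs[("new", "old")][0] - payoffs[("old", "old")][0])  # -2

# So (old, old) is a Nash equilibrium even though (new, new) yields more
# total surplus (6 > 4): each early adopter confers a benefit on other
# adopters that she cannot capture -- the "excess inertia" story.
```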

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.
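
A simple numeric illustration (all numbers assumed) shows why the structure of the fee matters: because payment requires agreement between referrer and referred, the fee can be split so as to overcome the new user’s switching cost, which is precisely a Coasean bargain.

```python
# Assumed numbers: a referral fee can be split so that a Coasean
# bargain overcomes the new user's switching cost.
referral_fee = 10.0    # paid by the platform to the referring early adopter
switching_cost = 4.0   # borne by the user being referred

# Any transfer t with switching_cost < t < referral_fee leaves both sides
# better off: the referrer keeps (referral_fee - t) > 0 and the switcher
# nets (t - switching_cost) > 0.
t = 6.0
print(referral_fee - t > 0 and t - switching_cost > 0)  # True: both gain
```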

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, workplace messaging platforms (such as Slack), and video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making, imposing the products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for the network to reach critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch the headphone jack on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when those incumbents are shielded by network effects. Of course, critics may retort that this is not enough: competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and the problem is not just one of network effects, but also one of economies of scale, information asymmetry, and so on. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably steer markets toward these optimal outcomes, which is questionable at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if that were possible) may well outweigh the benefits achieved. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure prevented network effects from acting as a barrier to entry. Whatever one makes of its ultimate conclusion, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily entail “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book and avoid blackboard economics in favor of a more granular approach.