[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.]

[This post is authored by Gerard Llobet, Professor of Economics at CEMFI, and Jorge Padilla, Senior Managing Director at Compass Lexecon. Both have advised SEP holders, and to a lesser extent licensees, in royalty negotiations and antitrust disputes.]

Over the last few years competition authorities in the US and elsewhere have repeatedly warned about the risk of patent hold-up in the licensing of Standard Essential Patents (SEPs). Concerns about such risks were front and center in the recent FTC case against Qualcomm, where the Court ultimately concluded that Qualcomm had used a series of anticompetitive practices to extract unreasonable royalties from implementers. This post evaluates the evidence for such a risk, as well as the countervailing risk of patent hold-out.

In general, hold-up may arise when firms negotiate trading terms after they have made costly, relation-specific investments. Since the costs of these investments are sunk by the time trading terms are negotiated, they are not factored into the agreed terms. As a result, depending on the relative bargaining power of the firms, the investments made by the weaker party may be undercompensated (Williamson, 1979).

In the context of SEPs, patent hold-up would arise if SEP owners were able to take advantage of the essentiality of their patents to charge excessive royalties to manufacturers of products reading on those patents that made irreversible investments in the standard (see Lemley and Shapiro (2007)). Similarly, in the recent FTC v. Qualcomm ruling, trial judge Lucy Koh concluded that firms may also use commercial strategies (in this case, Qualcomm’s “no license, no chips” policy, refusing to deal with certain parties and demanding exclusivity from others) to extract royalties that depart from the FRAND benchmark.

After years of heated debate, however, there is no consensus about whether patent hold-up actually exists. Some argue that there is no evidence of hold-up in practice. If patent hold-up were a significant problem, manufacturers would anticipate that their investments would be expropriated and would thus decide not to invest in the first place. But end-product manufacturers have invested considerable amounts in standardized technologies (Galetovic et al, 2015). Others claim that while investment is indeed observed, actual investment levels are “necessarily” below those that would be observed in the absence of hold-up. They allege that, since that counterfactual scenario is not observable, it is not surprising that more than fifteen years after the patent hold-up hypothesis was first proposed, empirical evidence of its existence is lacking.

Meanwhile, innovators are concerned about a risk in the opposite direction, the risk of patent hold-out. As Epstein and Noroozi (2018) explain,

By “patent holdout” we mean the converse problem, i.e., that an implementer refuses to negotiate in good faith with an innovator for a license to valid patent(s) that the implementer infringes, and instead forces the innovator to either undertake significant litigation costs and time delays to extract a licensing payment through court order, or else to simply drop the matter because the licensing game is no longer worth the candle.

Patent hold-out, also known as “efficient infringement,” is especially relevant in the standardization context for two reasons. First, SEP owners are oftentimes required to license their patents under Fair, Reasonable and Non-Discriminatory (FRAND) conditions. Particularly when, as occurs in some jurisdictions, innovators are not allowed to request an injunction, they have little or no leverage with which to require licensees to accept a licensing deal. Second, SEP owners typically hold many complementary patents and therefore seek to license their portfolio of SEPs all at once, since doing so minimizes transaction costs. Yet some manufacturers de facto refuse to negotiate in this way and instead choose to challenge the validity of the SEP portfolio patent-by-patent and/or jurisdiction-by-jurisdiction. This strategy involves large litigation costs and is therefore inefficient. SEP holders claim that this practice is anticompetitive and that it leads to royalties that are too low.

While the concerns of SEP holders seem to have attracted the attention of the leadership of the US DOJ (see, for example, here), some authors have dismissed them as theoretically groundless, empirically immaterial and irrelevant from an antitrust perspective (see here). 

Evidence of patent hold-out from litigation

In ongoing work (Llobet and Padilla, forthcoming), we analyze the effects of the sequential litigation strategy adopted by some manufacturers and compare its consequences with the simultaneous litigation of the whole portfolio. We show that sequential litigation results in lower royalty payments than simultaneous litigation and may result in the under-compensation of innovation and the dissipation of social surplus when litigation costs are high.

The model relies on two basic and realistic assumptions. First, in sequential lawsuits, the outcome of one trial affects the probability that each party wins the next one. That is, if the manufacturer wins the first trial, it has a higher probability of winning the second, since a first victory may uncover information about the validity of other patents covering the same type of innovation, making them less likely to be upheld in court. Second, the impact of a validity challenge on royalty payments is asymmetric: they are reduced to zero if the patent is found to be invalid but are not increased if it is found valid (and infringed).

Our results indicate that these features of the legal system can be strategically used by the manufacturer. The intuition is as follows. Suppose that the innovator sets a royalty rate for each patent for which, in the simultaneous trial case, the manufacturer would be indifferent between settling and litigating. Under sequential litigation, however, the manufacturer might be willing to challenge a patent because of the gain in a future trial. This is due to the asymmetric effects that winning or losing the second trial has on the royalty rate that this firm will have to pay. In particular, if the manufacturer wins the first trial, so that the first patent is invalidated, its probability of winning the second one increases, which means that the innovator is likely to settle for a lower royalty rate for the second patent or see both patents invalidated in court. In the opposite case, if the innovator wins the first trial, so that the second is also likely to be unfavorable to the manufacturer, the latter always has the option to pay up the original royalty rate and avoid the second trial. In other words, the possibility for the manufacturer to negotiate the royalty rate downwards after a victory, without the risk of it being increased in case of a defeat, fosters sequential litigation and results in lower royalties than the simultaneous litigation of all patents would produce. 
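The option-value logic above can be illustrated with a back-of-the-envelope expected-payoff calculation. The numbers below are purely hypothetical and are not drawn from Llobet and Padilla (forthcoming); the point is only to show how the payoff asymmetry makes sequential challenge attractive to the manufacturer:

```python
# Hypothetical, illustrative numbers only -- not from the paper.
r = 4.0  # per-patent royalty set by the innovator
p = 0.5  # prob. the manufacturer invalidates a patent in a first trial
q = 0.8  # prob. of invalidating patent 2 *after* invalidating patent 1
C = 4.0  # cost of one simultaneous trial over the whole portfolio
c = 2.0  # cost of each sequential trial

# Option 1: settle both patents at the offered rate.
settle = 2 * r  # = 8.0

# Option 2: litigate the whole portfolio at once. The innovator has set r
# so that the manufacturer is indifferent between settling and litigating.
simultaneous = (1 - p) * 2 * r + C  # = 8.0, indifferent by design

# Option 3: litigate patent 1 first.
# If patent 1 falls (prob. p), challenge patent 2 with improved odds q,
# unless settling at r is cheaper than the expected second trial.
after_win = min(r, (1 - q) * r + c)
# If patent 1 is upheld (prob. 1-p), the royalty is NOT revised upward
# (the asymmetry in the text), so the manufacturer simply pays 2r.
sequential = c + p * after_win + (1 - p) * 2 * r  # ≈ 7.4 < 8.0

print(f"settle={settle}, simultaneous={simultaneous}, sequential={sequential:.1f}")
```

On these numbers the manufacturer is exactly indifferent between settling (8.0) and litigating the portfolio at once (8.0), yet sequential litigation costs it only about 7.4 in expectation: a win on patent 1 opens room to renegotiate patent 2 downwards, while a loss leaves the original rate unchanged.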

This mechanism, while applicable to any portfolio that includes patents whose validity is related, becomes more significant in the context of SEPs for two reasons. The first is that it is difficult for innovators to adjust their royalties upwards after a first successful trial, as doing so might be considered a breach of their FRAND commitments. The second is that, following recent competition law litigation in the EU and other jurisdictions, SEP owners are restricted in their ability to seek (preliminary) injunctions even in the case of willful infringement. Our analysis demonstrates that the threat of injunction mitigates, though it is unlikely to eliminate completely, the incentive to litigate sequentially and, therefore, excessively (i.e., even when such litigation reduces social welfare).

We also find a second motivation for excessive litigation: business stealing. Manufacturers litigate excessively in order to avoid payment and thus achieve a valuable cost advantage over their competitors. They prefer to litigate, even when litigation costs are so large that society would be better off without litigation, because their royalty burden is reduced both in absolute terms and relative to that of their rivals (while it does not go up if the patents are found valid). This business-stealing incentive will result in the under-compensation of innovators, as above, but importantly it may also result in the anticompetitive foreclosure of more efficient competitors.

Consider, for example, a scenario in which a large firm with the ability to fund protracted litigation efforts competes in a downstream market with a competitive fringe, comprising small firms for which litigation is not an option. In this scenario, the large manufacturer may choose to litigate to force the innovator to settle on a low royalty. The large manufacturer exploits the asymmetry with its defenseless small rivals to reduce its IP costs. In some jurisdictions it may also exploit yet another asymmetry in the legal system to achieve an even larger cost advantage. If both the large manufacturer and the innovator choose to litigate and the former wins, the patent is invalidated, and the large manufacturer avoids paying royalties altogether. Whether this confers a comparative advantage on the large manufacturer depends on whether the invalidation results in the immediate termination of all other existing licenses or not.

Our work thus shows that patent hold-out concerns are both theoretically cogent and have non-trivial antitrust implications. Whether such concerns merit intervention is an empirical matter. While reviewing that evidence is outside the scope of our work, our own litigation experience suggests that patent hold-out should be taken seriously.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.]

[This post is authored by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California Gould School of Law.]

There is little doubt that the decision in May 2019 by the Northern District of California in FTC v. Qualcomm is of historical importance. Unless reversed or modified on appeal, the decision would require that the lead innovator behind 3G and 4G smartphone technology renegotiate hundreds of existing licenses with device producers and offer new licenses to any interested chipmakers.

The court’s sweeping order caps off a global campaign by implementers to re-engineer the property-rights infrastructure of the wireless markets. Those efforts have deployed the instruments of antitrust and patent law to override existing licensing arrangements and thereby reduce the input costs borne by device producers in the downstream market. This has occurred both directly, through arguments made by those firms in antitrust and patent litigation or through the filing of amicus briefs, and indirectly, by advocating that regulators bring antitrust actions against IP licensors.

Whether or not FTC v. Qualcomm is correctly decided largely depends on whether or not downstream firms’ interest in minimizing the costs of obtaining technology inputs from upstream R&D specialists aligns with the public interest in preserving dynamically efficient innovation markets. As I discuss below, there are three reasons to believe those interests are not aligned in this case. If so, the court’s order would simply engineer a wealth transfer from firms that have led innovation in wireless markets to producers that have borne few of the costs and risks involved in doing so. Members of the former group each exhibit R&D intensities (R&D expenditures as a percentage of sales) in the high teens to low twenties; members of the latter, approximately five percent. Of greater concern, the court’s upending of long-established licensing arrangements endangers business models that monetize R&D by licensing technology to a large pool of device producers (see Qualcomm), rather than earning returns through self-contained hardware and software ecosystems (see Apple). There is no apparent antitrust rationale for picking and choosing among these business models in innovation markets.

Reason #1: FRAND is a Two-Sided Deal

To fully appreciate the recent litigations involving the FTC and Apple on the one hand, and Qualcomm on the other hand, it is necessary to return to the origins of modern wireless markets.

Starting in the late 1980s, various firms were engaged in the launch of the GSM wireless network in Western Europe. At that time, each European telecom market typically consisted of a national monopoly carrier and a favored group of local equipment suppliers. The GSM project, which envisioned a trans-national wireless communications market, challenged this model. In particular, the national carrier and equipment monopolies were threatened by the fact that the GSM standard relied in part on patented technology held by an outside innovator—namely, Motorola. As I describe in a forthcoming publication, the “FRAND” (fair, reasonable and nondiscriminatory) principles that today govern the licensing of standard-essential patents in wireless markets emerged from a negotiation between, on the one hand, carriers and producers who sought a royalty cap and, on the other hand, a technology innovator that sought to preserve its licensing freedom going forward.

This negotiation history is important. Any informed discussion of the meaning of FRAND must recognize that this principle was adopted as something akin to a “good faith” contractual term designed to promote two objectives:

  1. Protect downstream adopters from holdup tactics by upstream innovators; and
  2. Enable upstream innovators to enjoy an appreciable portion of the value generated by sales in the consumer market.

Any interpretation of FRAND that does not meet these conditions will induce upstream firms to reduce R&D investment, limit participation in standard-setting activities, or vertically integrate forward to capture directly a return on R&D dollars.

Reason #2: No Evidence of Actual Harm

In the December 2018 appellate court proceedings in which the Department of Justice unsuccessfully challenged the AT&T/Time-Warner merger, Judge David Sentelle of the D.C. Circuit said to the government’s legal counsel:

If you’re going to rely on an economic model, you have to rely on it with quantification. The bare theorem . . . doesn’t prove anything in a particular case.

The government could not credibly respond to that challenge in the AT&T case and, if appropriately pressed, could not do so in this case either.

Far from being a market that calls out for federal antitrust intervention, the smartphone market offers what appears to be an almost textbook case of dynamic efficiency. For over a decade, implementers, along with sympathetic regulators and commentators, have argued that the market suffers (or, in a variation, will imminently suffer) from inflated prices, reduced output and delayed innovation as a result of “patent hold-up” and “royalty stacking” by opportunistic patent owners. In the decades that have passed since the launch of the GSM network, none of these predictions has materialized. To the contrary: the market has exhibited expanding output, declining prices (adjusted for increased functionality), constant innovation, and regular entry into the production market. Multiple empirical studies (e.g. this, this and this) have found that device producers bear, on average, an aggregate royalty burden in the low-to-mid single digits.

This hardly seems like a market in which producers and consumers are being “victimized” by what the Northern District of California calls “unreasonably high” licensing fees (compared to an unspecified, and inherently unspecifiable, dynamically efficient benchmark). Rather, it seems more likely that device producers—many of whom provided the testimony which the court referenced in concluding that royalty rates were “unreasonably high”—would simply prefer to pay an even lower fee to R&D input suppliers (with no assurance that any of the cost-savings would flow to consumers).

Reason #3: The “License as Tax” Fallacy

The rhetorical centerpiece of the FTC’s brief relied on an analogy between the patent license fees earned by Qualcomm in the downstream device market and the tax that everyone pays to the IRS. The court’s opinion wholeheartedly adopted this narrative, determining that Qualcomm imposes a tax (or, as Judge Koh terms it, a “surcharge”) on the smartphone market by demanding a fee from OEMs for use of its patent portfolio whether or not the OEM purchases chipsets from Qualcomm or another firm. The tax analogy is fundamentally incomplete, both in general and in this case in particular.

It is true that much of the economic literature applies monopoly taxation models to assess the deadweight losses attributed to patents. While this analogy facilitates analytical tractability, a “zero-sum” approach to patent licensing overlooks the value-creating “multiplier” effect that licensing generates in real-world markets. Specifically, broad-based downstream licensing by upstream patent owners—something to which SEP owners commit under FRAND principles—ensures that device makers can obtain the necessary technology inputs and, in doing so, facilitates entry by producers that do not have robust R&D capacities. All of that ultimately generates gains for consumers.

This “positive-sum” multiplier effect appears to be at work in the smartphone market. Far from acting as a tax, Qualcomm’s licensing policies appear to have promoted entry into the smartphone market, which has experienced fairly robust turnover in market leadership. While Apple and Samsung may currently dominate the U.S. market, they face intense competition globally from Chinese firms such as Huawei, Xiaomi and Oppo. That competitive threat is real. As of 2007, Nokia and Blackberry were the overwhelming market leaders and appeared to be indomitable. Yet neither can be found in the market today. That intense “gale of competition”, sustained by the fact that any downstream producer can access the required technology inputs upon payment of licensing fees to upstream innovators, challenges the view that Qualcomm’s licensing practices have somehow restrained market growth.

Concluding Thoughts: Antitrust Flashback

When competitive harms are so unclear (and competitive gains so evident), modern antitrust law sensibly prescribes forbearance. A famous “bad case” from antitrust history shows why.

In 1953, the Department of Justice won an antitrust suit against United Shoe Machinery Corporation, which had led innovation in shoe manufacturing equipment and subsequently dominated that market. United Shoe’s purportedly anti-competitive practices included a lease-only policy that incorporated training and repair services at no incremental charge. The court found this to be a coercive tie that preserved United Shoe’s dominant position, despite the absence of any evidence of competitive harm. Scholars have subsequently shown (e.g. this and this; see also this) that the court did not adequately consider (at least) two efficiency explanations: (1) lease-only policies were widespread in the market because they facilitated access by smaller, capital-constrained manufacturers, and (2) tying support services to equipment enabled United Shoe to avoid free-riding on its training services by other equipment suppliers. In retrospect, the courts relied on a mere possibility theorem to order the break-up of a technological pioneer, with potentially adverse consequences for manufacturers that relied on its R&D efforts.

The court’s decision in FTC v. Qualcomm is a flashback to cases like United Shoe in which courts found liability and imposed dramatic remedies with little economic inquiry into competitive harm. It has become fashionable to assert that current antitrust law is too cautious in finding liability. Yet there is a sound reason why, outside price-fixing, courts generally insist that theories of antitrust liability include compelling evidence of competitive harm. Antitrust remedies are strong medicine and should be administered with caution. If courts and regulators do not zealously scrutinize the factual support for antitrust claims, then they are vulnerable to capture by private entities whose business objectives may depart from the public interest in competitive markets. While no antitrust fact-pattern is free from doubt, over two decades of market performance strongly favor the view that long-standing licensing arrangements in the smartphone market have resulted in substantial net welfare gains for consumers. If so, the prudent course of action is simply to leave the market alone.

[This post is the first in an ongoing symposium on “Breaking up Big Tech” that will feature analysis and opinion from various perspectives.]

[This post is authored by Randal C. Picker, James Parker Hall Distinguished Service Professor of Law at The University of Chicago Law School]

The European Commission just announced that it is investigating Amazon. The Commission’s concern is that Amazon is simultaneously acting as referee and player: Amazon sells goods directly as a first party but also operates a platform on which it hosts goods sold by third parties (resellers), and those goods sometimes compete. And, as a next step, Amazon is said to choose which markets to enter as a private-label seller at least in part by utilizing information it gleans from the third-party sales it hosts.

Assuming there is a problem …

Were Amazon’s activities thought to be a problem, the natural remedies, whether through antitrust or more direct, sector-specific regulation, might be to bar Amazon from being both a direct seller and a platform. India has already passed a statute that effectuates some of those results, though it seems targeted at non-domestic companies.

A broad regulation that barred Amazon from being simultaneously a seller of first-party inventory and of third-party inventory presumably would lead to a dissolution of the company into separate companies in each of those businesses. A different remedy—a classic that goes back at least as far in the United States as the 1887 Commerce Act—would be to impose some sort of nondiscrimination obligation on Amazon and perhaps to couple that with some sort of business-line restriction—a quarantine—that would bar Amazon from entering markets though private labels.

But is there a problem?

Private labels have been around a long time and large retailers have faced buy-vs.-build decisions along the way. Large, sophisticated retailers like A&P in a different era and Walmart and Costco today, just to choose two examples, are constantly rebalancing their inventory between that which they buy from third parties and that which they produce for themselves. As I discuss below, being a platform matters for the buy-vs.-build decision, but it is far from clear that being both a store and a platform simultaneously matters importantly for how we should look at these issues.

Of course, when Amazon opened for business in July 1995 it didn’t quite face these issues immediately. Amazon sold books—it billed itself as “Earth’s Biggest Bookstore”—but there is no private-label possibility for books, no option of substituting into selling, say, only “The Wit and Wisdom of Jeff Bezos.” You could of course build an ebooks platform—call that a Kindle—but that would be a decade or so down the road. But as Amazon expanded into more pedestrian goods, it would, like other retailers, naturally make decisions about which inventory to source internally and which to buy from third parties.

In September 1999, Amazon opened up what was being described as an online mall. Amazon called it zShops and the idea was clear: many customers came to Amazon to buy things that Amazon wasn’t offering and Amazon would bring that audience and a variety of transaction services to third parties. Third parties would in turn pay Amazon a monthly fee and a variety of transaction fees. Amazon CEO Jeff Bezos noted (as reported in The Wall Street Journal) that those prices had been set in a way to make Amazon generally “neutral” in choosing whether to enter a market through first-party inventory or through third-party inventory.

Note that a traditional retailer and the original Amazon faced a natural question: which goods to carry in inventory? When Amazon opened its platform, it powerfully changed that question. Even a Walmart Supercenter has limited physical shelf space and has to take something off the shelves to stock a new product. By becoming a platform, Amazon largely outsourced the product-selection and shelf-space-allocation question to third parties. The new Amazon resellers would get access to Amazon’s substantial customer base—its audience—and to a variety of transactional services that Amazon would provide them.

An online retailer has some real informational advantages over physical stores, as the online retailer sees every product that customers search for. It is much harder, though not impossible, for a physical store to capture that information. But as Amazon became a platform, it would no longer observe just search queries for goods; it would also see actual sales by the resellers. And a physical store isn’t a platform in the way that Amazon is, as the physical store is constrained by limited shelf space. But the real target here is the marginal information Amazon gets from third-party sales relative to what it would see from product searches at Amazon, from its own first-party sales and from clicks on the growing amount of advertising it sells on its website.

All of that might matter for running product and inventory experiments and the corresponding pace of learning what goods customers want at what price. A physical store has to remove some item from its shelves to experiment with a new item and has to buy the item to stock it, though how much of a risk it is taking there will depend on whether the retailer can return unsold goods to the inventory supplier. A platform retailer like Amazon doesn’t have to make those tradeoffs and an online mall could offer almost an infinite inventory of items. A store or product ready for every possible search.

A possible strategy

All of this suggests a possible business strategy for a platform: let third parties run inventory experiments where the platform gets to see the results. Products that don’t sell are failed experiments, and the platform doesn’t enter those markets. But when a third party sells a product in real numbers, the platform can start selling that product as first-party inventory. Amazon would then face buy vs. build on that product, and that should make clear that the private-brands question is distinct from the question of whether Amazon can leverage third-party reseller information to those resellers’ detriment. It can certainly do just that by buying competing goods from a wholesaler and stocking them as first-party Amazon inventory.

If Amazon is playing this strategy, it seems to be playing it slowly and poorly. Amazon CEO Jeff Bezos includes a letter each year to open Amazon’s annual report to shareholders. In the 2018 letter, Bezos opened by noting that “[s]omething strange and remarkable has happened over the last 20 years.” What was that? In 1999, the relevant number was 3%; five years later, in 2004, it was 25%, then 31% in 2009, 49% in 2014 and 58% in 2018. These were the percentages of physical gross merchandise sales made by third-party sellers through Amazon. In 1999, 97% of Amazon’s sales were of its own first-party inventory, but the percentage of third-party sales rose steadily over 20 years, and over the last four years of that period third-party inventory sales exceeded Amazon’s own internal sales. As Bezos noted, Amazon’s first-party sales had grown dramatically—a 25% annual compound growth rate over that period—but in 2018, total third-party sales revenues were $160 billion while Amazon’s own first-party sales were at $117 billion. Bezos had a perspective on all of that—“Third-party sellers are kicking our first party butt. Badly.”—but if you believed the original vision behind creating the Amazon platform, Amazon should be indifferent between first-party sales and third-party sales, as long as all of that happens at Amazon.

This isn’t new

Given all of that, it isn’t crystal clear to me why Amazon gets as much attention as it does. The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws.

As suggested above, I think it is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to built-or-bought first-party inventory. We have entire bodies of law—copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

In conclusion

There is a great deal more to say about a company as complex as Amazon, but two thoughts in closing. One story here is that Amazon has built a superior business model in combining first-party and third-party inventory sales, and that is exactly the kind of business-model innovation that we should applaud. Amazon has enjoyed remarkable growth, but Walmart is still vastly larger than Amazon (ballpark numbers for 2018 are roughly $510 billion in net sales for Walmart vs. roughly $233 billion for Amazon, including all third-party sales as well as Amazon Web Services). The second story is the remarkable growth of sales by resellers at Amazon.

If Amazon is creating private-label goods based on information it sees on its platform, nothing suggests that it is doing so particularly rapidly. And even if it is entering those markets, it might well do so even were we to break up Amazon and separate the platform piece (call it Amazon Platform) from the original first-party version (say, Amazon Classic): traditional retailers have, for a very long time, been making buy-vs.-build decisions on their first-party inventory and using their internal information to make those decisions.

[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties. (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term, “interactive computer services”; I address this concern by acknowledging it with this parenthetical.) This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, and what I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including a response to the obvious (and overwrought) objections to it. But first, the post offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230. 

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case meant that platforms faced a Hobson’s choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but are free to moderate that content without facing liability for doing so. The law recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; and, on the other hand, that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content. 

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for copyrighted content and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability. 

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

The best-known prior case is the Ninth Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault. 

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the Ninth Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty running between the platform and the user. That duty was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Because the platform faces liability directly for its own failure to warn, it is not shielded by Section 230 (which only shields the platform from liability for the conduct of third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash, so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront. 

The Third Circuit judges would have none of that argument. All three judges agreed that under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law; the dissenting judge would have found Amazon’s conduct insufficient as a basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. Product liability laws are more common among the states, and broader in scope, than duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual-knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230, and that increasingly reflect and influence the discussion in the United States. 

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

At the same time, there is a lot of really, disgustingly problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct, and to make real-time decisions about speech that is impossible to evaluate in real time, is not a solution to these problems. And to the extent that some platforms may be able to do these things, extending the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation. 

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that any such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and is needed to keep vital.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, email addresses on widely used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they are engaged in commercial activities or because the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context, to the extent such compliance is feasible given the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But more than two decades later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, or even SMS authentication. None of these is perfect; they vary in the cost and sophistication needed to implement them, and in the ease of identification they offer.

The proposal offered here is not that platforms must be able to identify every speaker – it is better described as a requirement that they not deliberately act as a liability shield. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes would be fine maintaining a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.

This proportionality approach also addresses the anonymous speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for – and, as contemporary online discussion makes amply clear, can bring out the worst of – speech that is socially and politically destructive. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230. 

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that users engaging in problematic speech can be identified by the parties harmed by that speech or conduct. In an era in which sites like 8chan expressly decline to maintain user logs in order to shield users engaged in known harmful speech, and in which Amazon Marketplace admits sellers who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms engage in. It effectively tasks platforms with the courts’ job of developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally-focused safety valve to maintain their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against the user in the courts, then the “moderation” process occurs in the courts through ordinary civil litigation. 

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of the speaker’s) anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, controlled) content moderators. 

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated when Section 230 was adopted because user-identity requirements such as those proposed here would not then have been technologically reasonable. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Yesterday was President Trump’s big “Social Media Summit,” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court): 

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, ever since the Reagan FCC abolished the fairness doctrine in 1987, have consistently angled for its resurrection in some form, as well as for other policies inconsistent with the “free-speech-as-liberty” view. Thus, Democratic FCC commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.


After spending a few years away from ICLE, directly engaged in the day-to-day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. However, others use models like contracting lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or combining approaches as some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers: one working for a public defender’s office, who receives a salary that does not depend on caseload or billable hours, and another in private practice, who charges his client for the work he puts in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the backburner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that does not result in a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis to file the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.

The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices about how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes this means cases are worked on in chronological order, and insufficient time and effort is spent on particular cases that would have merited more investment, because of quickly approaching deadlines on other cases. Sometimes it means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, performing triage by spending their time where they believe they can help the most.

Unlike private criminal defense lawyers, public defenders can’t typically reject cases because their caseload has grown too big, or charge a higher price in order to take on a particularly difficult and time-consuming case. Therefore, the public defender is stuck in a position to simply guess at the best use of their time with the heuristics described above and do the very best they can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that will be part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements, like a well-designed voucher system, which do not suffer from this particular problem may be preferable.

Last year, real estate developer Alastair Mactaggart spent nearly $3.5 million to put a privacy law on the ballot in California’s November election. He then negotiated a deal with state lawmakers to withdraw the ballot initiative if they passed their own privacy bill. That law — the California Consumer Privacy Act (CCPA) — was enacted after only seven days of drafting and amending. CCPA will go into effect six months from today.

According to Mactaggart, it all began when he spoke with a Google engineer and was shocked to learn how much personal data the company collected. This revelation motivated him to find out exactly how much of his data Google had. Perplexingly, instead of using Google’s freely available transparency tools, Mactaggart decided to spend millions to pressure the state legislature into passing new privacy regulation.

The law creates six consumer rights: the right to know; the right of data portability; the right to deletion; the right to opt out of data sales; the right not to be discriminated against as a user; and a private right of action for data breaches.

So, what are the law’s prospects when it goes into effect next year? Here are ten reasons why CCPA is going to be a dumpster fire.

1. CCPA compliance costs will be astronomical

“TrustArc commissioned a survey of the readiness of 250 firms serving California from a range of industries and company size in February 2019. It reports that 71 percent of the respondents expect to spend at least six figures in CCPA-related privacy compliance expenses in 2019 — and 19 percent expect to spend over $1 million. Notably, if CCPA were in effect today, 86 percent of firms would not be ready. An estimated half a million firms are liable under the CCPA, most of which are small- to medium-sized businesses. If all eligible firms paid only $100,000, the upfront cost would already be $50 billion. This is in addition to lost advertising revenue, which could total as much as $60 billion annually.” (AEI / Roslyn Layton)
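The quoted estimate's headline number is straightforward arithmetic; a minimal check (the 500,000-firm count and $100,000 per-firm cost are the survey's figures, not statutory values):

```python
# Back-of-envelope check of the quoted CCPA compliance estimate.
firms = 500_000            # estimated firms liable under CCPA, per the survey
cost_per_firm = 100_000    # low-end compliance spend per firm, in dollars

upfront_cost = firms * cost_per_firm
print(f"${upfront_cost / 1e9:.0f} billion")  # → $50 billion
```

Note that this is only the low-end figure: the 19 percent of firms expecting to spend over $1 million would push the total well past $50 billion.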

2. CCPA will be good for Facebook and Google (and bad for small ad networks)

“It’s as if the privacy activists labored to manufacture a fearsome cannon with which to subdue giants like Facebook and Google, loaded it with a scattershot set of legal restrictions, aimed it at the entire ads ecosystem, and fired it with much commotion. When the smoke cleared, the astonished activists found they’d hit only their small opponents, leaving the giants unharmed. Meanwhile, a grinning Facebook stared back at the activists and their mighty cannon, the weapon that they had slyly helped to design.” (Wired / Antonio García Martínez)

“Facebook and Google ultimately are not constrained as much by regulation as by users. The first-party relationship with users that allows these companies relative freedom under privacy laws comes with the burden of keeping those users engaged and returning to the app, despite privacy concerns.” (Wired / Antonio García Martínez)

3. CCPA will enable free-riding by users who opt out of data sharing

“[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, CCPA enables free riders—individuals that opt out but still expect the same services and price—and undercuts access to free content and services. Someone must pay for free services, and if individuals opt out of their end of the bargain—by allowing companies to use their data—they make others pay more, either directly or indirectly with lower quality services. CCPA tries to compensate for the drastic reduction in the effectiveness of online advertising, an important source of income for digital media companies, by forcing businesses to offer services even though they cannot effectively generate revenue from users.” (ITIF / Daniel Castro and Alan McQuinn)

4. CCPA is potentially unconstitutional as-written

“[T]he law potentially applies to any business throughout the globe that has/gets personal information about California residents the moment the business takes the first dollar from a California resident. Furthermore, the law applies to some corporate affiliates (parent, subsidiary, or commonly owned companies) of California businesses, even if those affiliates have no other ties to California. The law’s purported application to businesses not physically located in California raises potentially significant dormant Commerce Clause and other Constitutional problems.” (Eric Goldman)

5. GDPR compliance programs cannot be recycled for CCPA

“[C]ompanies cannot just expand the coverage of their EU GDPR compliance measures to residents of California. For example, the California Consumer Privacy Act:

  • Prescribes disclosures, communication channels (including toll-free phone numbers) and other concrete measures that are not required to comply with the EU GDPR.
  • Contains a broader definition of “personal data” and also covers information pertaining to households and devices.
  • Establishes broad rights for California residents to direct deletion of data, with differing exceptions than those available under GDPR.
  • Establishes broad rights to access personal data without certain exceptions available under GDPR (e.g., disclosures that would implicate the privacy interests of third parties).
  • Imposes more rigid restrictions on data sharing for commercial purposes.”

(IAPP / Lothar Determann)

6. CCPA will be a burden on small- and medium-sized businesses

“The law applies to businesses operating in California if they generate an annual gross revenue of $25 million or more, if they annually receive or share personal information of 50,000 California residents or more, or if they derive at least 50 percent of their annual revenue by “selling the personal information” of California residents. In effect, this means that businesses with websites that receive traffic from an average of 137 unique Californian IP addresses per day could be subject to the new rules.” (ITIF / Daniel Castro and Alan McQuinn)

CCPA “will apply to more than 500,000 U.S. companies, the vast majority of which are small- to medium-sized enterprises.” (IAPP / Rita Heimes and Sam Pfeifle)
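The “137 unique Californian IP addresses per day” figure in the ITIF quote follows directly from the statute's 50,000-consumer annual threshold, spread evenly across a year:

```python
# The statutory threshold is 50,000 California residents per year;
# averaged over 365 days, that works out to roughly 137 per day.
annual_threshold = 50_000
per_day = annual_threshold / 365
print(round(per_day))  # → 137
```

In other words, even a modest website with a trickle of California traffic can cross the threshold without ever approaching the $25 million revenue test.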

7. CCPA’s definition of “personal information” is extremely over-inclusive

“CCPA likely includes gender information in the “personal information” definition because it is “capable of being associated with” a particular consumer when combined with other datasets. We can extend this logic to pretty much every type or class of data, all of which become re-identifiable when combined with enough other datasets. Thus, all data related to individuals (consumers or employees) in a business’ possession probably qualifies as “personal information.” (Eric Goldman)

“The definition of “personal information” includes “household” information, which is particularly problematic. A “household” includes the consumer and other co-habitants, which means that a person’s “personal information” oxymoronically includes information about other people. These people’s interests may diverge, such as with separating spouses, multiple generations under the same roof, and roommates. Thus, giving a consumer rights to access, delete, or port “household” information affects other people’s information, which may violate their expectations and create major security and privacy risks.” (Eric Goldman)

8. CCPA penalties might become a source for revenue generation

“According to the new Cal. Civ. Code §1798.150, companies that become victims of data theft or other data security breaches can be ordered in civil class action lawsuits to pay statutory damages between $100 to $750 per California resident and incident, or actual damages, whichever is greater, and any other relief a court deems proper, subject to an option of the California Attorney General’s Office to prosecute the company instead of allowing civil suits to be brought against it.” (IAPP / Lothar Determann)

“According to the new Cal. Civ. Code §1798.155, companies can be ordered in a civil action brought by the California Attorney General’s Office to pay penalties of up to $7,500 per intentional violation of any provision of the California Consumer Privacy Act, or, for unintentional violations, if the company fails to cure the unintentional violation within 30 days of notice, $2,500 per violation under Section 17206 of the California Business and Professions Code. Twenty percent of such penalties collected by the State of California shall be allocated to a new “Consumer Privacy Fund” to fund enforcement.” (IAPP / Lothar Determann)

“[T]he Attorney General, through its support of SB 561, is seeking to remove this provision, known as a “30-day cure,” arguing that it would be able to secure more civil penalties and thus increase enforcement. Specifically, the Attorney General has said it needs to raise $57.5 million in civil penalties to cover the cost of CCPA enforcement.”  (ITIF / Daniel Castro and Alan McQuinn)
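To put the Attorney General's stated $57.5 million target in perspective, a quick illustrative calculation using the statutory per-violation amounts quoted above shows how many violations would be needed to reach it:

```python
# Purely illustrative: violations needed to hit the AG's enforcement budget,
# using the statutory per-violation penalty amounts.
target = 57_500_000
per_unintentional = 2_500   # per violation not cured within 30 days of notice
per_intentional = 7_500     # maximum per intentional violation

print(target // per_unintentional)  # → 23000 unintentional violations
print(target // per_intentional)    # → 7666 intentional violations (at the cap)
```

Either way, hitting the target requires penalizing violations at scale, which is why removing the 30-day cure provision matters so much to the revenue calculus.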

9. CCPA is inconsistent with existing privacy laws

“California has led the United States and often the world in codifying privacy protections, enacting the first laws requiring notification of data security breaches (2002) and website privacy policies (2004). In the operative section of the new law, however, the California Consumer Privacy Act’s drafters did not address any overlap or inconsistencies between the new law and any of California’s existing privacy laws, perhaps due to the rushed legislative process, perhaps due to limitations on the ability to negotiate with the proponents of the Initiative. Instead, the new Cal. Civ. Code §1798.175 prescribes that in case of any conflicts with California laws, the law that affords the greatest privacy protections shall control.” (IAPP / Lothar Determann)

10. CCPA will need to be amended, creating uncertainty for businesses

As of now, a dozen bills amending CCPA have passed the California Assembly and continue to wind their way through the legislative process. California lawmakers have until September 13th to make any final changes to the law before it goes into effect. In the meantime, businesses have to begin compliance preparations under a cloud of uncertainty about what the law says today — or what it might say in the future.

More than a century of bad news

Bill Gates recently tweeted the image below, commenting that he is “always amazed by the disconnect between what we see in the news and the reality of the world around us.”

https://pbs.twimg.com/media/D8zWfENUYAAvK5I.png

Of course, this chart and Gates’s observation are nothing new – there has long been an accuracy gap between what the news covers (and therefore what Americans believe is important) and what is actually important. As discussed in one academic article on the subject:

The line between journalism and entertainment is dissolving even within traditional news formats. [One] NBC executive decreed that every news story should “display the attributes of fiction, of drama. It should have structure and conflict, problem and denouement, rising action and falling action, a beginning, a middle and an end.” … This has happened both in broadcast and print journalism. … Roger Ailes … explains this phenomenon with an Orchestra Pit Theory: “If you have two guys on a stage and one guy says, ‘I have a solution to the Middle East problem,’ and the other guy falls in the orchestra pit, who do you think is going to be on the evening news?”

Matters of policy get increasingly short shrift. In 1968, the network newscasts generally showed presidential candidates speaking, and on the average a candidate was shown speaking uninterrupted for forty-two seconds. Over the next twenty years, these sound bites had shrunk to an average of less than ten seconds. This phenomenon is by no means unique to broadcast journalism; there has been a parallel decline in substance in print journalism as well. …

The fusing of news and entertainment is not accidental. “I make no bones about it—we have to be entertaining because we compete with entertainment options as well as other news stories,” says the general manager of a Florida TV station that is famous, or infamous, for boosting the ratings of local newscasts through a relentless focus on stories involving crime and calamity, all of which are presented in a hyperdramatic tone (the so-called “If It Bleeds, It Leads” format). There was a time when news programs were content to compete with other news programs, and networks did not expect news divisions to be profit centers, but those days are over.

That excerpt feels like it could have been written today. It was not: it was published in 1996. The “if it bleeds, it leads” trope is often attributed to a 1989 New York magazine article – and once introduced into the popular vernacular, the phrase quickly caught on:

Of course, the idea that the media sensationalizes its reporting is not a novel observation. “If it bleeds, it leads” is just the late-20th century term for what had been “sex sells” – and the idea of yellow journalism before then. And, of course, “if it bleeds” is the precursor to our more modern equivalent of “clickbait.”

The debate about how to save the press from Google and Facebook … is the wrong debate to have

We are in the midst of a debate about how to save the press in the digital age. The House Judiciary Committee recently held a hearing on the relationship between online platforms and the press; and the Australian Competition & Consumer Commission recently released a preliminary report on the same topic.

In general, these discussions focus on concerns that advertising dollars have shifted from analog-era media in the 20th century to digital platforms in the 21st century – leaving the traditional media underfunded and unable to do its job. More specifically, competition authorities are being urged (by the press) to look at this through the lens of antitrust, arguing that Google and Facebook are the dominant two digital advertising platforms and have used their market power to harm the traditional media.

I have previously explained that this is bunk, as has John Yun in critiquing current proposals. I won’t rehash those arguments here, beyond noting that traditional media’s revenues have been falling since the advent of the Internet – not since the advent of Google or Facebook. The problem the traditional media face is not that monopoly platforms are engaging in conduct harmful to them; it is that the Internet is better as both an advertising platform and an information-distribution platform, such that advertisers and information consumers alike have migrated to digital platforms (and away from traditional news media).

This is not to say that digital platforms are capable of, or well-suited to, the production and distribution of the high-quality news and information content that we have historically relied on the traditional media to produce. Yet, contemporary discussions about whether traditional news media can survive in an era where ad revenue accrues primarily to large digital platforms have been surprisingly quiet on the question of the quality of content produced by the traditional media.

Actually, that’s not quite true. First, as indicated by the chart tweeted by Gates, digital platforms may be providing consumers with information that is more relevant to them.

Second, and more important, media advocates argue that without the ad revenue that has been diverted (by advertisers, not by digital platforms) to firms like Google and Facebook, they lack the resources to produce high-quality content. But that assumes they would produce high-quality content if they had those resources. As Gates’s chart – and the last century of news production – demonstrates, that is an ill-supported claim. History suggests that, left to its own devices and not constrained for resources by competition from digital platforms, the traditional media produces significant amounts of clickbait.

It’s all about the Benjamins

Among critics of the digital platforms, there is a line of argument that the advertising-based business model is the original sin of the digital economy. The ad-based business model corrupts digital platforms and turns them against their users – the user, that is, becomes the product in the surveillance capitalism state. We would all be much better off, the argument goes, if the platforms operated under subscription- or micropayment-based business models.

It is noteworthy that press advocates eschew this line of argument. Their beef with the platforms is that they have “stolen” the ad revenue that rightfully belongs to the traditional media. The ad revenue, of course, that is the driver behind clickbait, “if it bleeds it leads,” “sex sells,” and yellow journalism. The original sin of advertising-based business models is not original to digital platforms – theirs is just an evolution of the model perfected by the traditional media.

I am a believer in the importance of the press – and, for that matter, for the efficacy of ad-based business models. But more than a hundred years of experience makes clear that mixing the two into the hybrid bastard that is infotainment should prompt concern and discussion about the business model of the traditional press (and, indeed, for most of the past 30 years or so it has done so).

When it comes to “saving the press,” the discussion ought not be about how to restore traditional media to its pre-Facebook glory days of the early aughts, or even its pre-modern-Internet golden age of the late 1980s. By that point, the media was already well along the slippery slope to where it is today. We desperately need a strong, competitive market for news and information. We should use the crisis that market is currently in to discuss solutions for the future, not how to preserve the past.

Thomas Wollmann has a new paper — “Stealth Consolidation: Evidence from an Amendment to the Hart-Scott-Rodino Act” — in American Economic Review: Insights this month. Greg Ip included this research in an article for the WSJ in which he claims that “competition has declined and corporate concentration risen through acquisitions often too small to draw the scrutiny of antitrust watchdogs.” In other words, “stealth consolidation”.

Wollmann’s study uses a difference-in-differences approach to examine the effect on merger activity of the 2001 amendment to the Hart-Scott-Rodino (HSR) Antitrust Improvements Act of 1976 (15 U.S.C. 18a). The amendment abruptly increased the pre-merger notification threshold from $15 million to $50 million in deal size. Strictly on those terms, the paper shows that raising the pre-merger notification threshold increased merger activity.

However, claims about “stealth consolidation” are controversial because they connote nefarious intentions and anticompetitive effects. As Wollmann admits in the paper, due to data limitations, he is unable to show that the new mergers are in fact anticompetitive or that the social costs of these mergers exceed the social benefits. Therefore, more research is needed to determine the optimal threshold for pre-merger notification rules, and claiming that harmful “stealth consolidation” is occurring is currently unwarranted.

Background: The “Unscrambling the Egg” Problem

In general, it is more difficult to unwind a consummated anticompetitive merger than it is to block a prospective anticompetitive merger. As Wollmann notes, for example, “El Paso Natural Gas Co. acquired its only potential rival in a market” and “the government’s challenge lasted 17 years and involved seven trips to the Supreme Court.”

Rolling back an anticompetitive merger is so difficult that it came to be known as “unscrambling the egg.” As William J. Baer, a former director of the Bureau of Competition at the FTC, described it, “there were strong incentives for speedily and surreptitiously consummating suspect mergers and then protracting the ensuing litigation” prior to the implementation of a pre-merger notification rule. These so-called “midnight mergers” were intended to avoid drawing antitrust scrutiny.

In response to this problem, Congress passed the Hart–Scott–Rodino Antitrust Improvements Act of 1976, which required companies to notify antitrust authorities of impending mergers if they exceeded certain size thresholds.

2001 Hart–Scott–Rodino Amendment

In 2001, Congress amended the HSR Act and effectively raised the threshold for premerger notification from $15 million in acquired firm assets to $50 million. This sudden and dramatic change created an opportunity to use a difference-in-differences technique to study the relationship between filing an HSR notification and merger activity.
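The difference-in-differences logic here can be sketched on toy data (all numbers below are illustrative, not Wollmann's): compare the change in merger activity for newly-exempt deals ($15M–$50M) around 2001 against the same change for never-exempt deals (>$50M), which serve as the control group absorbing the common time trend.

```python
# Illustrative difference-in-differences on hypothetical merger counts.
# "Treated" = newly-exempt deals ($15M-$50M); "control" = never-exempt (>$50M).
counts = {
    # (group, period): average annual merger count (made-up numbers)
    ("treated", "pre"): 100, ("treated", "post"): 130,
    ("control", "pre"): 200, ("control", "post"): 210,
}

treated_change = counts[("treated", "post")] - counts[("treated", "pre")]  # 30
control_change = counts[("control", "post")] - counts[("control", "pre")]  # 10

# Netting out the common trend isolates the effect of the exemption itself.
did_estimate = treated_change - control_change
print(did_estimate)  # → 20 extra annual mergers attributable to the threshold change
```

The identifying assumption, as in any DiD design, is that absent the amendment the two groups would have followed parallel trends, which is why the sharpness of the 2001 threshold change is so useful.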

According to Wollmann, here’s what notifications look like for never-exempt mergers (>$50M):

And here’s what notifications for newly-exempt ($15M < X < $50M) mergers look like:

So what does that mean for merger investigations? Here is the number of investigations into never-exempt mergers:

We see a pretty consistent relationship between the number of mergers and the number of investigations. More mergers means more investigations.

How about for newly-exempt mergers?

Here, investigations go to zero while merger activity remains relatively stable. In other words, it appears that some mergers that would have been investigated had they required an HSR notification were not investigated.

Wollmann then uses four-digit SIC code industries to sort mergers into horizontal and non-horizontal categories. Here are never-exempt mergers:

He finds that almost all of the increase in merger activity (relative to the counterfactual in which the notification threshold had remained unchanged) is driven by horizontal mergers. And here are newly-exempt mergers:

Policy Implications & Limitations

The charts show a stark change in investigations and merger activity. The difference-in-differences methodology is solid and the author addresses some potential confounding variables (such as presidential elections). However, the paper leaves the broader implications for public policy unanswered.
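The mechanics of the estimator are simple enough to sketch. In a minimal sketch (the figures below are hypothetical placeholders, not Wollmann's data), never-exempt mergers serve as the control group and newly-exempt mergers as the treated group, and the estimate is the change in the treated group's outcome net of the change in the control group's:

```python
# Minimal difference-in-differences sketch (illustrative numbers only,
# not Wollmann's data).
# Control: never-exempt mergers (>$50M) -- unaffected by the 2001 Amendment.
# Treated: newly-exempt mergers ($15M-$50M) -- notification requirement removed.
# Outcome: average annual count of merger investigations.

pre_never, post_never = 40.0, 38.0   # investigations/yr, never-exempt
pre_newly, post_newly = 20.0, 1.0    # investigations/yr, newly-exempt

# DiD estimate: change in the treated group minus change in the control group.
# The control-group change nets out trends common to both groups.
did = (post_newly - pre_newly) - (post_never - pre_never)
print(did)  # -17.0 -> investigations into newly-exempt mergers fell by ~17/yr
```

The control group's small decline absorbs any economy-wide trend in enforcement, so the residual drop is attributed to the threshold change. This is the identifying assumption of the design: absent the Amendment, both groups would have trended in parallel.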

Furthermore, given the limits of the data in this analysis, it’s not possible for this approach to explain competitive effects in the relevant antitrust markets, for three reasons:

Four-digit SIC code industries are not antitrust markets

Wollmann chose to classify mergers “as horizontal or non-horizontal based on whether or not the target and acquirer operate in the same four-digit SIC code industry, which is common convention.” But as Werden & Froeb (2018) note, four-digit SIC code industries are orders of magnitude too large in most cases to be useful for antitrust analysis:

The evidence from cartel cases focused on indictments from 1970–80. Because the Justice Department prosecuted many local cartels, for 52 of the 80 indictments examined, the Commerce Quotient was less than 0.01, i.e., the SIC 4-digit industry was at least 100 times the apparent scope of the affected market. Of the 80 indictments, 19 involved SIC 4-digit industries that had been thought to comport well with markets, so these were the most instructive. For 16 of the 19, the SIC 4-digit industry was at least 10 times the apparent scope of the affected market (i.e., the Commerce Quotient was less than 0.1).

Antitrust authorities do not rely on SIC 4-digit industry codes and instead establish a market definition based on the facts of each case. It is not possible to infer competitive effects from census data as Wollmann attempts to do.
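The Commerce Quotient logic in the quoted passage is straightforward arithmetic. A brief illustration, with figures invented for the example (not drawn from the cartel cases Werden & Froeb examined):

```python
# Hypothetical illustration of the Commerce Quotient from the quoted passage.
# CQ = (commerce in the affected antitrust market) /
#      (commerce in the 4-digit SIC industry).
# A CQ below 0.01 means the SIC industry is at least 100x the market's scope.

market_commerce = 5e6      # assumed: $5M of commerce in the affected market
industry_commerce = 1e9    # assumed: $1B of commerce in the SIC 4-digit industry

cq = market_commerce / industry_commerce
print(cq)                                    # 0.005 -> below the 0.01 threshold
print(industry_commerce / market_commerce)   # 200.0 -> industry 200x the market
```

A CQ of 0.005 means a merger classified as “horizontal” at the SIC level could involve firms that never actually compete in the same antitrust market.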

The data cannot distinguish between anticompetitive mergers and procompetitive mergers

As Wollmann himself notes, the results tell us nothing about the relative costs and benefits of the new HSR policy:

Even so, these findings do not on their own advocate for one policy over another. To do so requires equating industry consolidation to a specific amount of economic harm and then comparing the resulting figure to the benefits derived from raising thresholds, which could be large. Even if the agencies ignore the reduced regulatory burden on firms, introducing exemptions can free up agency resources to pursue other cases (or reduce public spending). These and related issues require careful consideration but simply fall outside the scope of the present work.

For instance, firms could be reallocating merger activity to targets below the new threshold to avoid erroneous enforcement or they could be increasing merger activity for small targets due to reduced regulatory costs and uncertainty.

The study is likely underpowered for effects on blocked mergers

While the paper provides convincing evidence that investigations of newly-exempt mergers decreased dramatically following the change in the notification threshold, there is no equally convincing evidence of an effect on blocked mergers. As Wollmann points out, blocked mergers were exceedingly rare both before and after the Amendment (emphasis added):

Over 57,000 mergers comprise the sample, which spans eighteen years. The mean number of mergers each year is 3,180. The DOJ and FTC receive 31,464 notifications over this period, or 1,748 per year. Also, as stated above, blocked mergers are very infrequent: there are on average 13 per year pre-Amendment and 9 per year post-Amendment.

Since blocked mergers are such a small percentage of total mergers both before and after the Amendment, we likely cannot tell from the data whether actual enforcement action changed significantly due to the change in notification threshold.

Greg Ip’s write-up for the WSJ includes some relevant charts for this issue. Ironically for a piece about the problems of lax merger review, the accompanying graphs show merger enforcement actions slightly increasing at both the FTC and the DOJ since 2001:

Source: WSJ

Overall, Wollmann’s paper does an effective job showing how changes in premerger notification rules can affect merger activity. However, due to data limitations, we cannot conclude anything about competitive effects or enforcement intensity from this study.

In an amicus brief filed last Friday, a diverse group of antitrust scholars joined the Washington Legal Foundation in urging the U.S. Court of Appeals for the Second Circuit to vacate the Federal Trade Commission’s misguided 1-800 Contacts decision. Reasoning that 1-800’s settlements of trademark disputes were “inherently suspect,” the FTC condemned the settlements under a cursory “quick look” analysis. In so doing, it improperly expanded the category of inherently suspect behavior and ignored an obvious procompetitive justification for the challenged settlements.  If allowed to stand, the Commission’s decision will impair intellectual property protections that foster innovation.

A number of 1-800’s rivals purchased online ad placements that would appear when customers searched for “1-800 Contacts.” 1-800 sued those rivals for trademark infringement, and the lawsuits settled. As part of each settlement, 1-800 and its rival agreed not to bid on each other’s trademarked terms in search-based keyword advertising. (For example, EZ Contacts could not bid on a placement tied to a search for 1-800 Contacts, and vice-versa). Each party also agreed to employ “negative keywords” to ensure that its ads would not appear in response to a consumer’s online search for the other party’s trademarks. (For example, in bidding on keywords, 1-800 would have to specify that its ad must not appear in response to a search for EZ Contacts, and vice-versa). Notably, the settlement agreements didn’t restrict the parties’ advertisements through other media such as TV, radio, print, or other forms of online advertising. Nor did they restrict paid search advertising in response to any search terms other than the parties’ trademarks.

The FTC concluded that these settlement agreements violated the antitrust laws as unreasonable restraints of trade. Although the agreements were not unreasonable per se, as naked price-fixing is, the Commission didn’t engage in the normally applicable rule of reason analysis to determine whether the settlements passed muster. Instead, the Commission condemned the settlements under the truncated analysis that applies when, in the words of the Supreme Court, “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” The Commission decided that no more than a quick look was required because the settlements “restrict the ability of lower cost online sellers to show their ads to consumers.”

That was a mistake. First, the restraints in 1-800’s settlements are far less extensive than other restraints that the Supreme Court has said may not be condemned under a cursory quick look analysis. In California Dental, for example, the Supreme Court reversed a Ninth Circuit decision that employed the quick look analysis to condemn a de facto ban on all price and “comfort” advertising by members of a dental association. In light of the possibility that the ban could reduce misleading ads, enhance customer trust, and thereby stimulate demand, the Court held that the restraint must be assessed under the more probing rule of reason. A narrow limit on the placement of search ads is far less restrictive than the all-out ban for which the California Dental Court prescribed full-on rule of reason review.

1-800’s settlements are also less likely to be anticompetitive than are other settlements that the Supreme Court has said must be evaluated under the rule of reason. The Court’s Actavis decision rejected quick look and mandated full rule of reason analysis for reverse payment settlements of pharmaceutical patent litigation. In a reverse payment settlement, the patent holder pays an alleged infringer to stay out of the market for some length of time. 1-800’s settlements, by contrast, did not exclude its rivals from the market, place any restrictions on the content of their advertising, or restrict the placement of their ads except on webpages responding to searches for 1-800’s own trademarks. If the restraints in California Dental and Actavis required rule of reason analysis, then those in 1-800’s settlements surely must as well.

In addition to disregarding Supreme Court precedents that limit when mere quick look is appropriate, the FTC gave short shrift to a key procompetitive benefit of the restrictions in 1-800’s settlements. 1-800 spent millions of dollars convincing people that they could save money by ordering prescribed contact lenses from a third party rather than buying them from prescribing optometrists. It essentially built the online contact lens market in which its rivals now compete. In the process, it created a strong trademark, which undoubtedly boosts its own sales. (Trademarks point buyers to a particular seller and enhance consumer confidence in the seller’s offering, since consumers know that branded sellers will not want to tarnish their brands with shoddy products or service.)

When a rival buys ad space tied to a search for 1-800 Contacts, that rival is taking a free ride on 1-800’s investments in its own brand and in the online contact lens market itself. A rival that has advertised less extensively than 1-800—primarily because 1-800 has taken the lead in convincing consumers to buy their contact lenses online—will incur lower marketing costs than 1-800 and may therefore be able to underprice it.  1-800 may thus find that it loses sales to rivals who are not more efficient than it is but have lower costs because they have relied on 1-800’s own efforts.

If market pioneers like 1-800 cannot stop this sort of free-riding, they will have less incentive to make the investments that create new markets and develop strong trade names. The restrictions in the 1-800 settlements were simply an effort to prevent inefficient free-riding while otherwise preserving the parties’ freedom to advertise. They were a narrowly tailored solution to a problem that hurt 1-800 and reduced incentives for future investments in market-developing activities that inure to the benefit of consumers.

Rule of reason analysis would have allowed the FTC to assess the full market effects of 1-800’s settlements. The Commission’s truncated assessment, which was inconsistent with Supreme Court decisions on when a quick look will suffice, condemned conduct that was likely procompetitive. The Second Circuit should vacate the FTC’s order.

The full amicus brief, primarily drafted by WLF’s Corbin Barthold and joined by Richard Epstein, Keith Hylton, Geoff Manne, Hal Singer, and me, is here.

This guest post is by Corbin K. Barthold, Litigation Counsel at Washington Legal Foundation.

Complexity need not follow size. A star is huge but mostly homogeneous. “Its core is so hot,” explains Martin Rees, “that no chemicals can exist (complex molecules get torn apart); it is basically an amorphous gas of atomic nuclei and electrons.”

Nor does complexity always arise from remoteness of space or time. Celestial gyrations can be readily grasped. Thales of Miletus probably predicted a solar eclipse. Newton certainly could have done so. And we’re confident that in 4.5 billion years the Andromeda galaxy will collide with our own.

If the simple can be seen in the large and the distant, equally can the complex be found in the small and the immediate. A double pendulum is chaotic. Likewise the local weather, the fluctuations of a wildlife population, or the dispersion of the milk you pour into your coffee.

Our economy is not like a planetary orbit. It’s more like the weather or the milk. No one knows which companies will become dominant, which products will become popular, or which industries will become defunct. No one can see far ahead. Investing is inherently risky because the future of the economy, or even a single segment of it, is intractably uncertain. Do not hand your savings to any expert who says otherwise. Experts, in fact, often see the least of all.

But if a broker with a “sure thing” stock is a mountebank, what does that make an antitrust scholar with an “optimum structure” for a market? 

Not a prophet.

There is so much that we don’t know. Consider, for example, the notion that market concentration is a good measure of market competitiveness. The idea seems intuitive enough, and in many corners it remains an article of faith.

But the markets where this assumption is most plausible—hospital care and air travel come to mind—are heavily shaped by that grand monopolist we call government. Only a large institution can cope with the regulatory burden placed on the healthcare industry. As Tyler Cowen writes, “We get the level of hospital concentration that we have in essence chosen through politics and the law.”

As for air travel: the government promotes concentration by barring foreign airlines from the domestic market. In any case, the state of air travel does not support a straightforward conclusion that concentration equals power. The price of flying has fallen almost continuously since passage of the Airline Deregulation Act in 1978. The major airlines are disciplined by fringe carriers such as JetBlue and Southwest.

It is by no means clear that, aside from cases of government-imposed concentration, a consolidated market is something to fear. Technology lowers costs, lower costs enable scale, and scale tends to promote efficiency. Scale can arise naturally, therefore, from the process of creating better and cheaper products.

Say you’re a nineteenth-century cow farmer, and the railroad reaches you. Your shipping costs go down, and you start to sell to a wider market. As your farm grows, you start to spread your capital expenses over more sales. Your prices drop. Then refrigerated rail cars come along, you start slaughtering your cows on site, and your shipping costs go down again. Your prices drop further. Farms that fail to keep pace with your cost-cutting go bust. The cycle continues until beef is cheap and yours is one of the few cow farms in the area. The market improves as it consolidates.

As the decades pass, this story repeats itself on successively larger stages. The relentless march of technology has enabled the best companies to compete for regional, then national, and now global market share. We should not be surprised to see ever fewer firms offering ever better products and services.

Bear in mind, moreover, that it’s rarely the same company driving each leap forward. As Geoffrey Manne and Alec Stapp recently noted in this space, markets are not linear. Just after you adopt the next big advance in the logistics of beef production, drone delivery will disrupt your delivery network, cultured meat will displace your product, or virtual-reality flavoring will destroy your industry. Or—most likely of all—you’ll be ambushed by something you can’t imagine.

Does market concentration inhibit innovation? It’s possible. “To this day,” write Joshua Wright and Judge Douglas Ginsburg, “the complex relationship between static product market competition and the incentive to innovate is not well understood.” 

There’s that word again: complex. When will thumping company A in an antitrust lawsuit increase the net amount of innovation coming from companies A, B, C, and D? Antitrust officials have no clue. They’re as benighted as anyone. These are the people who will squash Blockbuster’s bid to purchase a rival video-rental shop less than two years before Netflix launches a streaming service.

And it’s not as if our most innovative companies are using market concentration as an excuse to relax. If its only concern were maintaining Google’s grip on the market for internet-search advertising, Alphabet would not have spent $16 billion on research and development last year. It spent that much because its long-term survival depends on building the next big market—the one that does not exist yet.

No expert can reliably make the predictions necessary to say when or how a market should look different. And if we empowered some experts to make such predictions anyway, no other experts would be any good at predicting what the empowered experts would predict. Experts trying to give us “well structured” markets will instead give us a costly, politicized, and stochastic antitrust enforcement process. 

Here’s a modest proposal. Instead of using the antitrust laws to address the curse of bigness, let’s create the Office of the Double Pendulum. We can place the whole section in a single room at the Justice Department. 

All we’ll need is some ping-pong balls, a double pendulum, and a monkey. On each ball will be the name of a major corporation. Once a quarter—or a month; reasonable minds can differ—a ball will be drawn, and the monkey prodded into throwing the pendulum. An even number of twirls saves the company on the ball. An odd number dooms it to being broken up.

This system will punish success just as haphazardly as anything our brightest neo-Brandeisian scholars can devise, while avoiding the ruinously expensive lobbying, rent-seeking, and litigation that arise when scholars succeed in replacing the rule of law with the rule of experts.

All hail the chaos monkey. Unutterably complex. Ineffably simple.