Archives For technology

Congress needs help understanding the fast moving world of technology. That help is not going to arise by reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing  technology resources available to Congress are relatively modest. 

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that entails. 

The real problem with reviving the OTA is that it distracts Congress from considering that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st-century technology and the economy. A new OTA might help with the former problem, but may in fact only exacerbate the latter. 

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA. 

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technological and scientific assumptions contained in proposed legislation. For example, in the 1990s the OTA provided Congress with useful technical information about how encryption technologies worked as it considered legislation such as CALEA. 

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for performing these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process. 

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources and creating a science and technology coordinator position in Congress to facilitate the hiring of appropriate personnel for committees, among other duties. 

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging, longer-term trends. This was an original function of the OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports. 

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it would present yet another vector for regulatory capture (or at least for endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page calling for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multibillion-dollar firms working on creating cutting-edge technologies. What you will do is create an interesting, temporary post-graduate-school or mid-career stop-over point where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making stuff, not writing research reports for Congress.

The same experts who are high-level enough to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable. 

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Even assuming for the moment that a new OTA could provide some kind of horizon-scanning capability, it would fail on its own terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it is too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting which technologies will impact society is a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to psychological literature indicating that, in many cases involving technical subjects, giving legislators more information only makes them overconfident. That is to say, they can cite more facts, but they put fewer of them to good use when writing laws. 

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.  

It would be far more useful for Congress to explore legislation that encourages firms in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and crises like the Cambridge Analytica scandal should be, and weighs those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing. 

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood: 

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it was ever suited at all, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information those engineers hold to surface and become an ongoing part of the governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

This is the second in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.

(Left, Android 10 Website; Right, iOS 13 Website)

In a previous post, I argued that the Commission failed to adequately define the relevant market in its recently published Google Android decision. 

This improper market definition might not have been so problematic had the Commission then undertaken a detailed (and balanced) assessment of the competitive conditions in the markets where Google operates (including the competitive constraints imposed by Apple). 

Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.

The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.

1. Significant investments and network effects ≠ barriers to entry

In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects. 

But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, rather than Stigler’s more technical definition).

Take the argument that significant investments are required to enter the mobile OS market.

The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance. 

For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not. 

Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.

Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.

The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms. 

As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch towards a superior platform, if one arose (See also Dan Spulber’s excellent article on this point).

The Commission completely ignored this critical question in its discussion of network effects.

2. The failure of competitors is not proof of barriers to entry

Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry. 

This is the epitome of the Black Swan fallacy (i.e. inferring that all swans are white because you have never seen a relatively rare, but not irrelevant, black swan).

The failure of rivals is equally consistent with any number of propositions: 

  • There were indeed barriers to entry; 
  • Google’s products were extremely good (in ways that rivals and the Commission failed to grasp); 
  • Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market); 
  • Previous rivals were persistently inept (to take the words of Oliver Williamson); etc. 

The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.

3. First mover advantage?

Much the same can be said about the Commission’s observation that Google enjoyed a first-mover advantage. 

The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.

To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market. 

Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals? 

Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.

The result is that, at best, the Commission identified a correlation but not causality. Google may arguably have been the first, and users might have been more familiar with its offerings, but this still does not prove that Android flourished (and rivals failed) because of this.

4. It does not matter that users “do not take the OS into account” when they purchase a device

The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).

In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.

Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account. 

But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.

In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.

Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS. 

5. Google’s users were not “captured”

Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first-time buyers was insufficient to constrain Google’s behavior toward its “captured” installed base.

It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users. 

The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.

But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.

And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.

To make matters worse, the Commission’s own claimed figures suggest that a large share of sales remained contestable (roughly 39%).

Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.

According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
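For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the stylized example above; the 100 annual sales, the 75/25 split between existing and new users, the 76–24 installed-base split, and the 18% and 22% switching rates are all taken from the figures cited in the text, and the rounding follows the text.

```python
# Stylized contestability calculation using the figures cited in the post.
# Assumptions (from the post, not the decision itself): 100 devices sold per
# year, 75 to existing users and 25 to first-time buyers; installed base split
# 76/24 in favor of Android; 18% of Android users and 22% of Apple users switch.

total_sales = 100
sales_to_existing = 75
sales_to_new = 25           # first-time buyers show no brand loyalty

android_share = 0.76
apple_share = 0.24

android_switch_rate = 0.18  # 100% - 82% retention
apple_switch_rate = 0.22    # 100% - 78% retention

existing_android = sales_to_existing * android_share  # ~57 buyers
existing_apple = sales_to_existing * apple_share      # ~18 buyers

switchers = (existing_android * android_switch_rate   # ~10 buyers
             + existing_apple * apple_switch_rate)    # ~4 buyers

contestable = sales_to_new + switchers
print(f"Switchers: {switchers:.0f}, contestable sales: {contestable:.0f} of {total_sales}")
# -> roughly 14 switchers and 39 contestable sales, i.e. ~39% of the market
```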

Conclusion

In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it is) and another to establish that its overall conclusion was wrong.

At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.

The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society,  New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning. 

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars concluding that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence that Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced against the recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Yet neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.

In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data. 

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers, takes it a step further. Data from legacy U.S. airline mergers appear to show that they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts Europe’s lower broadband prices as evidence that its competition policy in telecommunications is superior to that of the United States. While European broadband prices are lower on average, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, titled U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer together people live, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th among the 29 countries studied (the other 28 mostly European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th, if data usage is also included (Model 3), and ranks 7th if content quality (i.e., websites available in the local language) is taken into consideration instead (Model 4).

Country / Model 1 Price, Rank / Model 2 Price, Rank / Model 3 Price, Rank / Model 4 Price, Rank
Australia $78.30 28 $82.81 27 $102.63 26 $84.45 23
Austria $48.04 17 $60.59 15 $73.17 11 $74.02 17
Belgium $46.82 16 $66.62 21 $75.29 13 $81.09 22
Canada $69.66 27 $74.99 25 $92.73 24 $76.57 19
Chile $33.42 8 $73.60 23 $83.81 20 $88.97 25
Czech Republic $26.83 3 $49.18 6 $69.91 9 $60.49 6
Denmark $43.46 14 $52.27 8 $69.37 8 $63.85 8
Estonia $30.65 6 $56.91 12 $81.68 19 $69.06 12
Finland $35.00 9 $37.95 1 $57.49 2 $51.61 1
France $30.12 5 $44.04 4 $61.96 4 $54.25 3
Germany $36.00 12 $53.62 10 $75.09 12 $66.06 11
Greece $35.38 10 $64.51 19 $80.72 17 $78.66 21
Iceland $65.78 25 $73.96 24 $94.85 25 $90.39 26
Ireland $56.79 22 $62.37 16 $76.46 14 $64.83 9
Italy $29.62 4 $48.00 5 $68.80 7 $59.00 5
Japan $40.12 13 $53.58 9 $81.47 18 $72.12 15
Latvia $20.29 1 $42.78 3 $63.05 5 $52.20 2
Luxembourg $56.32 21 $54.32 11 $76.83 15 $72.51 16
Mexico $35.58 11 $91.29 29 $120.40 29 $109.64 29
Netherlands $44.39 15 $63.89 18 $89.51 21 $77.88 20
New Zealand $59.51 24 $81.42 26 $90.55 22 $76.25 18
Norway $88.41 29 $71.77 22 $103.98 27 $96.95 27
Portugal $30.82 7 $58.27 13 $72.83 10 $71.15 14
South Korea $25.45 2 $42.07 2 $52.01 1 $56.28 4
Spain $54.95 20 $87.69 28 $115.51 28 $106.53 28
Sweden $52.48 19 $52.16 7 $61.08 3 $70.41 13
Switzerland $66.88 26 $65.01 20 $91.15 23 $84.46 24
United Kingdom $50.77 18 $63.75 17 $79.88 16 $65.44 10
United States $58.00 23 $59.84 14 $64.75 6 $62.94 7
Average $46.55 $61.70 $80.24 $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models when trying to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

In March of this year, Elizabeth Warren announced her proposal to break up Big Tech in a blog post on Medium. She tried to paint the tech giants as dominant players crushing their smaller competitors and strangling the open internet. This line in particular stood out: “More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook.”

This statistic immediately struck me as outlandish, but I knew I would need to do some digging to fact check it. After seeing the claim repeated in a recent profile of the Open Markets Institute — “Google and Facebook control websites that receive 70 percent of all internet traffic” — I decided to track down the original source for this surprising finding. 

Warren’s blog post links to a November 2017 Newsweek article — “Who Controls the Internet? Facebook and Google Dominance Could Cause the ‘Death of the Web’” — written by Anthony Cuthbertson. The piece is even more alarmist than Warren’s blog post: “Facebook and Google now have direct influence over nearly three quarters of all internet traffic, prompting warnings that the end of a free and open web is imminent.”

The Newsweek article, in turn, cites an October 2017 blog post by André Staltz, an open source freelancer, on his personal website titled “The Web began dying in 2014, here’s how”. His takeaway is equally dire: “It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.” Staltz claims the blog post took “months of research to write”, but the headline statistic is merely aggregated from a December 2015 blog post by Parse.ly, a web analytics and content optimization software company.

Source: André Staltz

The Parse.ly article — “Facebook Continues to Beat Google in Sending Traffic to Top Publishers” — is about external referrals (i.e., outside links) to publisher sites (not total internet traffic) and says the “data set used for this study included around 400 publisher domains.” This is not even a random sample, much less a comprehensive measure of total internet traffic. Here’s how they summarize their results: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” 

Source: Parse.ly

So, using the sources provided by the respective authors, the claim from Elizabeth Warren that “more than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook” can be more accurately rewritten as “more than 70 percent of external links to 400 publishers come from sites owned or operated by Google and Facebook.” When framed that way, it’s much less conclusive (and much less scary).

But what’s the real statistic for total internet traffic? This is a surprisingly difficult question to answer, because there is no single way to measure it: Are we talking about share of users, or user-minutes, or bits, or total visits, or unique visits, or referrals? According to Wikipedia, “Common measurements of traffic are total volume, in units of multiples of the byte, or as transmission rates in bytes per certain time units.”

One of the more comprehensive efforts to answer this question is undertaken annually by Sandvine. The networking equipment company uses its vast installed footprint of equipment across the internet to generate statistics on connections, upstream traffic, downstream traffic, and total internet traffic (summarized in the table below). This dataset covers both browser-based and app-based internet traffic, which is crucial for capturing the full picture of internet user behavior.

Source: Sandvine

Looking at two categories of traffic analyzed by Sandvine — downstream traffic and overall traffic — gives the lie to the narrative pushed by Warren and others. As you can see in the chart below, HTTP media streaming — a category for smaller streaming services that Sandvine has not yet tracked individually — represented 12.8% of global downstream traffic, while Netflix accounted for 12.6%. According to Sandvine, “the aggregate volume of the long tail is actually greater than the largest of the short-tail providers.” So much for the open internet being smothered by the tech giants.

Source: Sandvine

As for Google and Facebook? The report found that Google-operated sites receive 12.00 percent of total internet traffic while Facebook-controlled sites receive 7.79 percent. In other words, less than 20 percent of all Internet traffic goes through sites owned or operated by Google or Facebook. While this statistic may be less eye-popping than the one trumpeted by Warren and other antitrust activists, it does have the virtue of being true.

Source: Sandvine

The FTC’s recent YouTube settlement and $170 million fine related to charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA) has the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.

According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.

COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While this isn’t completely solved by the use of targeted advertising based on web browsing and search history, the fact that such advertising is more lucrative than contextual advertising suggests that it works better for companies.
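To make the distinction concrete, here is a minimal toy sketch in Python; the ad categories, ad inventory, and history store are invented for illustration, and this is not how any real ad platform is implemented. Contextual selection looks only at the page being viewed, while cookie-based targeting also consults browsing history tied to a persistent identifier.

```python
# Toy illustration of contextual vs. cookie-based targeted ad selection.
# All names and data below are hypothetical, invented for this sketch.

AD_INVENTORY = {
    "toys": "Ad: building-block sets",
    "autos": "Ad: family SUV lease",
    "travel": "Ad: discount airfare",
}

# Hypothetical server-side store keyed by a persistent identifier (cookie value).
BROWSING_HISTORY = {
    "cookie-abc123": ["autos", "travel", "autos"],
}

def contextual_ad(page_topic: str) -> str:
    """Pick an ad based only on the topic of the page being viewed right now."""
    return AD_INVENTORY.get(page_topic, "Ad: generic brand campaign")

def targeted_ad(cookie: str, page_topic: str) -> str:
    """Pick an ad from the viewer's past browsing, falling back to context."""
    history = BROWSING_HISTORY.get(cookie, [])
    if history:
        most_frequent = max(set(history), key=history.count)
        return AD_INVENTORY.get(most_frequent, contextual_ad(page_topic))
    return contextual_ad(page_topic)

print(contextual_ad("toys"))                 # -> "Ad: building-block sets"
print(targeted_ad("cookie-abc123", "toys"))  # -> "Ad: family SUV lease"
```

The point of the sketch is simply that the cookie lets the seller of ad space price the viewer's likely interests rather than the page's content, which is why the targeted slot tends to command higher rates.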

COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old). 

As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube. 

YouTube’s error, in the eyes of the FTC, was that the site left it to channel owners on YouTube’s general audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. Turns out, many of those channels — including channels identified by the FTC as “directed to children” — made the more lucrative choice by choosing to have targeted advertisements on their channels. 

Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?

COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the access to the Internet to use the devices, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: to whom are the advertisements served to children actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.

Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: Parental oversight is essentially built-in to any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.

When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.

On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:

[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….

At the same time, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:

This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.

This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children. 

Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law
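Stated a bit more formally — the notation here is mine, not the brief’s or the treatise’s — a separating equilibrium is one in which different types choose different strategies, so that the uninformed player can infer type from the observed action:

```latex
% t \in \{P, A\}: the firm's type (procompetitive or anticompetitive), unobserved by the court.
% s(t): the strategy a firm of type t chooses; a: the conduct the court observes.
\[
\text{Separating: } s(P) \neq s(A)
  \;\Longrightarrow\;
  \Pr\bigl(t = A \mid a = s(A)\bigr) = 1
  \quad\text{and}\quad
  \Pr\bigl(t = A \mid a = s(P)\bigr) = 0.
\]
\[
\text{Pooling: } s(P) = s(A)
  \;\Longrightarrow\;
  \Pr\bigl(t = A \mid a\bigr) = \Pr(t = A).
\]
% When both types of firm would engage in the same conduct, observing that conduct
% supports no inference that the conduct is anticompetitive.
```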

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)). 

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant. 

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In short, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future, anticompetitive returns, is irrational.

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
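A simple numerical sketch can make the point. The demand function, the surcharge, and every number below are hypothetical assumptions of mine, not figures from the record; the sketch only illustrates why the size of the effect in scenario 1 depends on demand elasticity and on the surcharge’s share of the phone price.

```python
# Hypothetical illustration of scenario 1 above (full pass-through to consumers).

def phones_sold(price, base_price=400.0, base_quantity=1000.0, elasticity=-0.3):
    """Constant-elasticity demand: quantity falls as price rises, scaled by elasticity."""
    return base_quantity * (price / base_price) ** elasticity

base_price = 400.0   # assumed phone price
surcharge = 2.0      # assumed per-phone "surcharge" attributable to the chip

q_before = phones_sold(base_price)
q_after = phones_sold(base_price + surcharge)   # OEMs pass the full surcharge to consumers

print(f"Price rises {100 * surcharge / base_price:.2f}%; "
      f"phone (and thus chip) demand falls {100 * (1 - q_after / q_before):.2f}%")
# With demand this inelastic and a surcharge this small relative to the phone price,
# the fall in chip demand is a fraction of a percent -- and it approaches zero as
# elasticity approaches zero. Scenario 2 (OEMs absorb the cost) leaves chip demand
# unchanged; scenario 3 (switching to Qualcomm chips) is not a demand-side effect at
# all and would require comparing chip prices, which the court never did.
```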

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term “interactive computer service”; I address this concern by acknowledging it with this parenthetical). This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today. 

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, and what I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including responding to the obvious (and overwrought) objections to it. But first it offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230. 

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case meant that platforms faced a Hobson’s choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but they are free to moderate that content without facing liability for doing so. Congress recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; but, on the other hand, it recognized that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content. 

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for copyrighted content and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability. 

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best known previous case is the 9th Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault. 

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the 9th Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty between the platform and the user. This duty to warn was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Once triggered, this duty to warn exists between the platform and the user. Because the platform faces liability directly for its failure to warn, it is not shielded by section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash, so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront. 

The Third Circuit judges would have none of that argument. All three judges agreed that under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct insufficient as a basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. More states are likely to have broad product liability laws than duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual-knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230 as they increasingly reflect and influence discussions in the United States. 

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, converting the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation. 

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and is needed to keep vital.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, by email addresses on widely used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context, to the extent such compliance is feasible given the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But 25 years later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, or even SMS authentication. None of these is perfect; they present a range of costs and sophistication to implement, and they offer a range of ease of identification.

The proposal offered here is not that platforms must be able to identify their speakers – it’s better described as requiring that they not deliberately act as a liability shield. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes could satisfy it by maintaining a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.
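To make the lowest tier of that sliding scale concrete, here is a minimal sketch of what the hypothetical bread-recipe forum’s obligation might look like in practice: a simple log tying each post to a username, IP address, and timestamp, so that a civil unmasking subpoena could actually be answered. The function names and file format are illustrative assumptions of mine, not a reference to any real platform’s systems.

```python
# Minimal, illustrative identification log for a small forum (hypothetical throughout).
import csv
import datetime

LOG_FILE = "post_log.csv"

def log_post(post_id: str, username: str, ip_address: str) -> None:
    """Record who posted what, from where, and when."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [post_id, username, ip_address, datetime.datetime.utcnow().isoformat()]
        )

def records_for_post(post_id: str) -> list[list[str]]:
    """Retrieve the records the platform could produce in response to legal process."""
    with open(LOG_FILE, newline="") as f:
        return [row for row in csv.reader(f) if row and row[0] == post_id]

# Example: log a post, then look it up as if responding to an unmasking subpoena.
log_post("post-123", "sourdough_sam", "198.51.100.7")
print(records_for_post("post-123"))
```

A larger commercial marketplace would, under the proposal, layer verified-identity checks on top of something like this; the point is only that the basic record-keeping is cheap and off-the-shelf.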

This proportionality approach also addresses the anonymous-speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for speech that is socially and politically destructive – and, as contemporary online discussion makes amply clear, it can bring out the worst in speakers. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230. 

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that their users engaging in problematic speech can be identified by parties harmed by their speech or conduct. In an era in which sites like 8chan expressly don’t maintain user logs in order to shield those engaged in known harmful speech, and Amazon Marketplace allows sellers into the market who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms engage in. It effectively asks platforms to undertake the task of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally-focused safety valve to maintain their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against the user in the courts, then the “moderation” process occurs in the courts through ordinary civil litigation. 

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of the speaker’s) anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, controlled) content moderators. 

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated when Section 230 was adopted only because user-identity requirements such as those proposed here would not have been technologically reasonable at the time. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Yesterday was President Trump’s big “Social Media Summit,” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide over this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar, Kathleen Sullivan, has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court): 

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, since the Reagan FCC abolished it in 1987, have consistently angled for a resurrection of some form of fairness doctrine, as well as other policies inconsistent with the “free-speech-as-liberty” view. Thus Democratic commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what government regulation is needed to protect the First Amendment’s underlying value of free speech, even if that means compelling private actors to carry speech.

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.


It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper had hewed to the precedent established by Ohio v. Amex, it would have started with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as the plaintiff establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, the former (requiring plaintiffs to address effects on both sides of the market) imposes an additional burden on two-sided market plaintiffs, while the latter (recognizing standing for participants on both sides) lessens that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality, such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases against two-sided platforms unwinnable, thanks to Amex’s requirement that a prima facie showing of anticompetitive effect include an assessment of the effects on both sides of a two-sided market and proof of a net anticompetitive outcome. A proper decision in Apple v. Pepper should have chastened those critics. As it is, the holding (although not the reasoning) may still serve to allay their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex, which (assuming the App Store is construed as some species of two-sided “transaction” market) directs that Respondents bear the burden of accounting for harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the just demands for consistency, but by the error-cost framework of the overall analysis, which runs throughout the stages of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, under a proper definition of the relevant market, either the users on the different sides of a two-sided market suffer direct injury without pass-through, or else their interrelatedness is so strong that, complicated as it may be, the need for substantive accuracy trumps the administrative cost of sorting out the incidence of the overcharge, and courts cannot avoid that task.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity in damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders/implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the product or service of the platform in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitive charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert-goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge, it was borne entirely by concert-goers. As a result, apportionment — the complexity of which gave rise to the rule in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concert-goers in an indirect relationship with Ticketmaster was one in which Ticketmaster and the concert venues divvied up the alleged monopoly spoils, not one in which the venues absorbed their share of the monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple, on one hand, and developers and consumers, on the other, as some sort of complicated version of the classical manufacturing and distribution chain. But it is more sensible to inquire into the relevant factual differences that underpin Apple’s business model and to adapt how courts approach market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff can choose to allege that a defendant operates either as a two-sided market or in a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, it need only show that injury occurred to some subset of platform users with which the plaintiff is inextricably interrelated. The plaintiff would not need to demonstrate injury to itself, allege net harm, or show directness.

In response, a defendant can contest standing by challenging the interrelatedness of the plaintiff and the group of platform users with whom the plaintiff claims to be interrelated. If the defendant does not challenge the allegation that it operates a two-sided market, it cannot challenge standing by showing indirectness, that the plaintiff has not alleged personal injury, or that the plaintiff has not alleged net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms may require more evidence to establish anticompetitive harm, but that business model also exposes platform firms to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult to manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

One of the main concerns I had during the IANA transition was the extent to which the newly independent organization would be able to behave impartially, implementing its own policies and bylaws in an objective and non-discriminatory manner, and not be unduly influenced by specific “stakeholders”. Chief among my concerns at the time was the extent to which an independent ICANN would be able to resist the influence of governments: when a powerful government leaned on ICANN’s board, would it be able to adhere to its own policies and follow the process the larger multistakeholder community put in place?

It seems my concern was not unfounded. Amazon, Inc. has been in a long-running struggle with the countries of the Amazonian Basin in South America over the use of the generic top-level domain (gTLD) .amazon. In 2014, the ICANN board (which was still nominally under the control of the US’s NTIA) uncritically accepted the nonbinding advice of the Governmental Advisory Committee (“GAC”) and denied Amazon, Inc.’s application for .amazon. In 2017, an Independent Review Process panel reversed the board’s decision, because

[the board] failed in its duty to explain and give adequate reasons for its decision, beyond merely citing to its reliance on the GAC advice and the presumption, albeit a strong presumption, that it was based on valid and legitimate public policy concerns.  

Accordingly, the board was directed to reconsider the .amazon petition and

make an objective and independent judgment regarding whether there are, in fact, well-founded, merits-based public policy reasons for denying Amazon’s applications.

In the two years since that decision, a number of proposals were discussed between Amazon, Inc. and the Amazonian countries as they sought a mutually agreeable resolution to the dispute, none of which were successful. In March of this year, the board acknowledged the failed negotiations and announced that the parties had four more weeks to try again; if no agreement were reached in that time, Amazon, Inc. would be permitted to submit a proposal addressing the Amazonian countries’ cultural-protection concerns.

Predictably, that time elapsed and Amazon, Inc. submitted its proposal, which includes a public interest commitment that would allow the Amazonian countries access to certain second level domains under .amazon for cultural and noncommercial use. For example, Brazil could use a domain such as www.br.amazon to showcase the culturally relevant features of the portion of the Amazonian river that flows through its borders.

Prima facie, this seems like a reasonable way to ensure that the cultural interests of those living in the Amazonian region are adequately protected. Moreover, in its first stated objection to Amazon, Inc. having control of the gTLD, the GAC indicated that this was its concern:

[g]ranting exclusive rights to this specific gTLD to a private company would prevent the use of  this domain for purposes of public interest related to the protection, promotion and awareness raising on issues related to the Amazon biome. It would also hinder the possibility of use of this domain to congregate web pages related to the population inhabiting that geographical region.

Yet Amazon, Inc.’s proposal to protect just these interests was rejected by the Amazonian countries’ governments. The counteroffer from those governments was that they be permitted to co-own and administer the gTLD, that their governance interest be constituted in a steering committee on which Amazon, Inc. be given only a 1/9th vote, that they be permitted a much broader use of the gTLD generally and, judging by the conspicuous lack of language limiting use to noncommercial purposes, that they have the ability to use the gTLD for commercial purposes.

This last point certainly must be a nonstarter. Amazon, Inc.’s use of .amazon is naturally going to be commercial in nature. If eight other “co-owners” were permitted a backdoor to using the ‘.amazon’ name in commerce, trademark dilution seems like a predictable, if not inevitable, result. Moreover, the entire point of allowing brand gTLDs is to help responsible brand managers ensure that consumers are receiving the goods and services they expect on the Internet. Commercial use by the Amazonian countries could easily lead to a situation where merchants selling goods of unknown quality are able to mislead consumers by free riding on Amazon, Inc.’s name recognition.

This is a big moment for Internet governance

Theoretically, the ICANN board could decide this matter as early as this week — but apparently it has opted to treat this meeting as merely an opportunity for more discussion. That the board would consider not following through on its statement in March that it would finally put this issue to rest is not an auspicious sign that the board intends to take its independence seriously.

An independent ICANN must be able to stand up to powerful special interests when it comes to following its own rules and bylaws. This is the very concern that most troubled me before the NTIA cut the organization loose. Introducing more delay suggests that the board lacks the courage of its convictions. The Amazonian countries may end up irritated with the ICANN board, but ICANN is either an independent organization or it’s not.

Amazon, Inc. followed the prescribed procedures from the beginning; there is simply no good reason to draw out this process any further. The real fear here, I suspect, is that the board knows that this is a straightforward trademark case and is holding out hope that the Amazonian countries will make the necessary concessions that will satisfy Amazon, Inc. After seven years of this process, somehow I suspect that this is not likely and the board simply needs to make a decision on the proposals as submitted.

The truth is that these countries never even applied for use of the gTLD in the first place; they only became interested in the use of the domain once Amazon, Inc. expressed interest. All along, these countries maintained that they merely wanted to protect the cultural heritage of the region — surely a fine goal. Yet, when pressed to the edge of the timeline on the process, they produced a proposal that would theoretically permit them to operate commercial domains.

This is a test for ICANN’s board. If it doesn’t want to risk offending powerful parties, it shouldn’t be in the business of opening the DNS to new gTLDs, because, inevitably, there will be aggrieved parties that cannot be satisfied. Amazon, Inc. has submitted a solid proposal that allows it to protect both its own valid trademark interests in its brand and the cultural interests of the Amazonian countries. The board should vote on the proposal this week and stop delaying the process any further.