
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ben Sperry, (Associate Director, Legal Research, International Center for Law & Economics).]

The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions. 

Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:

We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.

Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.

Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific Twitter thread):

  1. Consumers ration how much they really need;
  2. Producers respond to the rising prices by ramping up supply and distributors make more available; and
  3. Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain. 

Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring. 

Rationing by consumers

During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages. 

Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods. 

If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e., those who most need the good) will have a better shot at purchasing it.
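To make the rationing logic concrete, here is a toy numerical sketch. This is my own illustration, not a model from the post or from Munger's thread, and every demand curve and parameter in it is hypothetical: two groups of buyers share a fixed supply, one price-elastic (stockpilers) and one price-inelastic (standing in for those who most need the good). At a capped price, combined demand outstrips supply; at the market-clearing price, the high-need group ends up with most of the goods.

```python
# Toy rationing model (illustrative only; all curves and numbers hypothetical).
# Linear demand q = max(0, a - b*p); a small slope b means demand barely
# responds to price, i.e. the buyer is price-inelastic.

def demand(price, a, b):
    """Quantity demanded at a given price for a linear demand curve."""
    return max(0.0, a - b * price)

SUPPLY = 100.0               # fixed units of the good available
ELASTIC = (120.0, 10.0)      # (a, b): stockpilers, demand falls fast in price
INELASTIC = (60.0, 1.0)      # (a, b): high-need buyers, demand nearly flat

def total_demand(price):
    return demand(price, *ELASTIC) + demand(price, *INELASTIC)

# At the pre-crisis (capped) price of $2, demand far exceeds supply:
capped = 2.0
print(total_demand(capped), "units demanded vs", SUPPLY, "available")

# Let the price rise until the market clears (simple bisection search).
lo, hi = 0.0, 60.0
for _ in range(60):
    mid = (lo + hi) / 2
    if total_demand(mid) > SUPPLY:
        lo = mid
    else:
        hi = mid
clearing = (lo + hi) / 2
print("clearing price ~", round(clearing, 2))
print("elastic group buys ~", round(demand(clearing, *ELASTIC), 1))
print("inelastic group buys ~", round(demand(clearing, *INELASTIC), 1))
```

At the capped price of $2, the two groups together want 158 of the 100 available units, so someone goes without. At the clearing price of roughly $7.27, the high-need group gets about 53 of the 58 units it wants, while the stockpilers cut back sharply.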

According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages. 

For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)

The role of “price gouging” laws and social norms

A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.

But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.

Under these laws, retailers typically may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution. 

Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.

Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which tend simply to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price gouging” on their sites.

But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon, as a first-party seller, ends up needing to raise prices, presumably as the pricing feedback mechanisms respond to cost increases up and down the supply chain. 

But without a willingness to allow retailers and producers to use the informational signal of higher prices, there will continue to be more extreme shortages as consumers rush to stockpile underpriced resources.

The desire to help the poor who cannot afford higher-priced essentials is what drives these policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.

Increased production and distribution

During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount. 

Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product, guaranteeing that producers neither undersupply the product (leaving more people without the good than need it) nor oversupply it (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.
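The feedback loop described above can be sketched as a textbook price-adjustment (tâtonnement) process. This is a stylized illustration of my own, with hypothetical linear supply and demand curves, not a model from the post: the price rises in proportion to the shortage, and producers read the rising price as the signal to expand output until supply meets demand.

```python
# Stylized price-adjustment (tatonnement) loop; all curves are hypothetical.

def demand(price):
    return max(0.0, 200.0 - 4.0 * price)  # crisis-level consumer demand

def supply(price):
    return 6.0 * price                     # producers expand output as price rises

price = 10.0   # pre-crisis price: demand(10)=160 vs supply(10)=60, a shortage
for step in range(1000):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:
        break                              # market has cleared
    price += 0.01 * excess                 # price rises in proportion to shortage

print(round(price, 2))          # converges near the clearing price of 20
print(round(supply(price), 1))  # output has doubled, from 60 to ~120 units
```

Starting from the pre-crisis price of 10, the loop converges to the clearing price of 20, at which producers have doubled output from 60 to 120 units: neither too little nor too much, which is exactly the coordination the paragraph describes.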

The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.

Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.

For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much so that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the somewhat lagging price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.

These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis. 

Entrepreneurs innovate around bottlenecks 

But the most interesting thing that occurs when prices rise is that entrepreneurs create new substitutes for in-demand products. For instance, distillers have started creating their own hand sanitizers.

Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.

Another example: car companies in the US are now producing ventilators. The FDA waived regulations on the production of new ventilators after General Motors, Ford, and Tesla announced they would be willing to devote idle production capacity to making them.

As consumers demand more toilet paper, bottled water, and staple foods than can be produced quickly, entrepreneurs respond by refocusing current capabilities on these goods. Examples abound.

Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use. 


While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets. 

If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals. 

Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher. 

Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short circuiting this signal will only be to the detriment of society.  

Since the LabMD decision, in which the Eleventh Circuit Court of Appeals told the FTC that its orders were unenforceably vague, the FTC has been put on notice that it needs to reconsider how it develops and substantiates its claims in data security enforcement actions brought under Section 5. 

Thus, on January 6, the FTC announced on its blog that it will have “New and improved FTC data security orders: Better guidance for companies, better protection for consumers.” However, the changes the Commission highlights address only a small part of what we have previously criticized about its “common law” of data security (see here and here). 

While the new orders do list more specific requirements to help explain what the FTC believes is a “comprehensive data security program,” there is still no legal analysis in either the orders or the complaints that would give companies fair notice of what the law requires. Furthermore, nothing about the underlying FTC process has changed, which means there is still enormous pressure for companies to settle rather than litigate the contours of what “reasonable” data security practices look like. Thus, despite the Commission’s optimism, the recent orders and complaints do little to nothing to remedy the problems that plague the Commission’s data security enforcement program.

The changes

In his blog post, the director of the Bureau of Consumer Protection at the FTC describes how new orders in data security enforcement actions are more specific, with one of the main goals being more guidance to businesses trying to follow the law.

Since the early 2000s, our data security orders had contained fairly standard language. For example, these orders typically required a company to implement a comprehensive information security program subject to a biennial outside assessment. As part of the FTC’s Hearings on Competition and Consumer Protection in the 21st Century, we held a hearing in December 2018 that specifically considered how we might improve our data security orders. We were also mindful of the 11th Circuit’s 2018 LabMD decision, which struck down an FTC data security order as unenforceably vague.

Based on this learning, in 2019 the FTC made significant improvements to its data security orders. These improvements are reflected in seven orders announced this year against an array of diverse companies: ClixSense (pay-to-click survey company), i-Dressup (online games for kids), DealerBuilt (car dealer software provider), D-Link (Internet-connected routers and cameras), Equifax (credit bureau), Retina-X (monitoring app), and Infotrax (service provider for multilevel marketers)…

[T]he orders are more specific. They continue to require that the company implement a comprehensive, process-based data security program, and they require the company to implement specific safeguards to address the problems alleged in the complaint. Examples have included yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption. These requirements not only make the FTC’s expectations clearer to companies, but also improve order enforceability.

Why the FTC’s data security enforcement regime fails to provide fair notice or develop law (and is not like the common law)

While these changes are long overdue, they are just one step in the direction of much-needed process reform at the FTC in how it prosecutes cases with its unfairness authority, particularly in the realm of data security. It’s helpful to understand exactly why the historical failures of the FTC process are problematic in order to understand why the changes it is undertaking are insufficient.

For instance, Geoffrey Manne and I previously highlighted the various ways the FTC’s data security consent order regime fails in comparison with the common law:

In Lord Mansfield’s characterization, “the common law ‘does not consist of particular cases, but of general principles, which are illustrated and explained by those cases.’” Further, the common law is evolutionary in nature, with the outcome of each particular case depending substantially on the precedent laid down in previous cases. The common law thus emerges through the accretion of marginal glosses on general rules, dictated by new circumstances. 

The common law arguably leads to legal rules with at least two substantial benefits—efficiency and predictability or certainty. The repeated adjudication of inefficient or otherwise suboptimal rules results in a system that generally offers marginal improvements to the law. The incentives of parties bringing cases generally means “hard cases,” and thus judicial decisions that have to define both what facts and circumstances violate the law and what facts and circumstances don’t. Thus, a benefit of a “real” common law evolution is that it produces a body of law and analysis that actors can use to determine what conduct they can undertake without risk of liability and what they cannot. 

In the abstract, of course, the FTC’s data security process is neither evolutionary in nature nor does it produce such well-defined rules. Rather, it is a succession of wholly independent cases, without any precedent, narrow in scope, and binding only on the parties to each particular case. Moreover it is generally devoid of analysis of the causal link between conduct and liability and entirely devoid of analysis of which facts do not lead to liability. Like all regulation it tends to be static; the FTC is, after all, an enforcement agency, charged with enforcing the strictures of specific and little-changing pieces of legislation and regulation. For better or worse, much of the FTC’s data security adjudication adheres unerringly to the terms of the regulations it enforces with vanishingly little in the way of gloss or evolution. As such (and, we believe, for worse), the FTC’s process in data security cases tends to reject the ever-evolving “local knowledge” of individual actors and substitutes instead the inherently limited legislative and regulatory pronouncements of the past. 

By contrast, real common law, as a result of its case-by-case, bottom-up process, adapts to changing attributes of society over time, largely absent the knowledge and rent-seeking problems of legislatures or administrative agencies. The mechanism of constant litigation of inefficient rules allows the common law to retain a generally efficient character unmatched by legislation, regulation, or even administrative enforcement. 

Because the common law process depends on the issues selected for litigation and the effects of the decisions resulting from that litigation, both the process by which disputes come to the decision-makers’ attention, as well as (to a lesser extent, because errors will be corrected over time) the incentives and ability of the decision-maker to render welfare-enhancing decisions, determine the value of the common law process. These are decidedly problematic at the FTC.

In our analysis, we found the FTC’s process to be wanting compared to the institution of the common law. The incentives of the administrative complaint process put a relatively larger pressure on companies to settle data security actions brought by the FTC compared to private litigants. This is because the FTC can use its investigatory powers as a public enforcer to bypass the normal discovery process to which private litigants are subject, and over which independent judges have authority. 

In a private court action, plaintiffs can’t engage in discovery unless their complaint survives a motion to dismiss from the defendant. Discovery costs remain a major driver of settlements, so this important judicial review is necessary to make sure there is actually a harm present before putting those costs on defendants. 

Furthermore, the FTC can also bring cases in a Part III adjudicatory process that starts in front of an administrative law judge (ALJ) but is then appealable to the FTC itself. Former Commissioner Joshua Wright noted in 2013 that “in the past nearly twenty years… after the administrative decision was appealed to the Commission, the Commission ruled in favor of FTC staff. In other words, in 100 percent of cases where the ALJ ruled in favor of the FTC, the Commission affirmed; and in 100 percent of the cases in which the ALJ ruled against the FTC, the Commission reversed.” Put simply, the FTC nearly always rules in favor of itself on appeal if the ALJ finds there is no case, as it did in LabMD. The combination of investigation costs before any complaint at all and the high likelihood of losing through several stages of litigation makes simply agreeing to a consent decree the intelligent business decision.

The results of this asymmetrical process show the FTC has not really been building a common law. In all but two cases (Wyndham and LabMD), the companies targeted for investigation by the FTC on data security enforcement have settled. We also noted how the FTC’s data security orders tended to be nearly identical from case to case, reflecting the standards of the FTC’s Safeguards Rule. Since the orders were giving nearly identical—and, as LabMD found, vague—remedies in each case, it cannot be said there was a common law developing over time.

What LabMD addressed and what it didn’t

In its decision, the Eleventh Circuit sidestepped fundamental substantive problems with the FTC’s data security practice concerning notice and substantial injury (which we have detailed in both our scholarship and our LabMD amicus brief). Instead, the court decided to assume the FTC had proven its case and focused exclusively on the remedy. 

We will assume arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data-security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.

What the Eleventh Circuit did address, though, was that the remedies the FTC had been routinely applying to businesses through its data enforcement actions lacked the necessary specificity in order to be enforceable through injunctions or cease and desist orders.

In the case at hand, the cease and desist order contains no prohibitions. It does not instruct LabMD to stop committing a specific act or practice. Rather, it commands LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness. This command is unenforceable. Its unenforceability is made clear if we imagine what would take place if the Commission sought the order’s enforcement…

The Commission moves the district court for an order requiring LabMD to show cause why it should not be held in contempt for violating the following injunctive provision:

[T]he respondent shall … establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers…. Such program… shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers….

The Commission’s motion alleges that LabMD’s program failed to implement “x” and is therefore not “reasonably designed.” The court concludes that the Commission’s alleged failure is within the provision’s language and orders LabMD to show cause why it should not be held in contempt.

At the show cause hearing, LabMD calls an expert who testifies that the data-security program LabMD implemented complies with the injunctive provision at issue. The expert testifies that “x” is not a necessary component of a reasonably designed data-security program. The Commission, in response, calls an expert who disagrees. At this point, the district court undertakes to determine which of the two equally qualified experts correctly read the injunctive provision. Nothing in the provision, however, indicates which expert is correct. The provision contains no mention of “x” and is devoid of any meaningful standard informing the court of what constitutes a “reasonably designed” data-security program. The court therefore has no choice but to conclude that the Commission has not proven — and indeed cannot prove — LabMD’s alleged violation by clear and convincing evidence.

In other words, the Eleventh Circuit found that an order requiring a reasonable data security program is not specific enough to make it enforceable. This leaves questions as to whether the FTC’s requirement of a “reasonable data security program” is specific enough to survive a motion to dismiss and/or a fair notice challenge going forward.

Under the Federal Rules of Civil Procedure, a plaintiff must provide “a short and plain statement . . . showing that the pleader is entitled to relief,” Fed. R. Civ. P. 8(a)(2), including “enough facts to state a claim . . . that is plausible on its face.” Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007). “[T]hreadbare recitals of the elements of a cause of action, supported by mere conclusory statements” will not suffice. Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). In FTC v. D-Link, for instance, the Northern District of California dismissed the unfairness claims because the FTC did not sufficiently plead injury. 

[T]hey make out a mere possibility of injury at best. The FTC does not identify a single incident where a consumer’s financial, medical or other sensitive personal information has been accessed, exposed or misused in any way, or whose IP camera has been compromised by unauthorized parties, or who has suffered any harm or even simple annoyance and inconvenience from the alleged security flaws in the DLS devices. The absence of any concrete facts makes it just as possible that DLS’s devices are not likely to substantially harm consumers, and the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor. 

The fair notice question wasn’t reached in LabMD, though it was in FTC v. Wyndham. But the Third Circuit did not analyze the FTC’s data security regime under the “ascertainable certainty” standard applied to agency interpretation of a statute.

Wyndham’s position is unmistakable: the FTC has not yet declared that cybersecurity practices can be unfair; there is no relevant FTC rule, adjudication or document that merits deference; and the FTC is asking the federal courts to interpret § 45(a) in the first instance to decide whether it prohibits the alleged conduct here. The implication of this position is similarly clear: if the federal courts are to decide whether Wyndham’s conduct was unfair in the first instance under the statute without deferring to any FTC interpretation, then this case involves ordinary judicial interpretation of a civil statute, and the ascertainable certainty standard does not apply. The relevant question is not whether Wyndham had fair notice of the FTC’s interpretation of the statute, but whether Wyndham had fair notice of what the statute itself requires.

In other words, Wyndham boxed itself into a corner arguing that it did not have fair notice that the FTC could bring a data security enforcement action against it under Section 5 unfairness. LabMD, on the other hand, argued it did not have fair notice as to how the FTC would enforce its data security standards. Cf. ICLE-TechFreedom Amicus Brief at 19. The Third Circuit even suggested that under an “ascertainable certainty” standard, the FTC failed to provide fair notice: “we agree with Wyndham that the guidebook could not, on its own, provide ‘ascertainable certainty’ of the FTC’s interpretation of what specific cybersecurity practices fail § 45(n).” Wyndham, 799 F.3d at 256 n.21.

Most importantly, the Eleventh Circuit did not actually get to the issue of whether LabMD actually violated the law under the factual record developed in the case. This means there is still no caselaw (aside from the ALJ decision in this case) which would allow a company to learn what is and what is not reasonable data security, or what counts as a substantial injury for the purposes of Section 5 unfairness in data security cases. 

How FTC’s changes fundamentally fail to address its failures of process

The FTC’s new approach to its orders is billed as directly responsive to what the Eleventh Circuit did reach in the LabMD decision, but it leaves in place much of what makes the process insufficient.

First, it is notable that while the FTC highlights changes to its orders, there is still a lack of legal analysis in the orders that would allow a company to accurately predict whether its data security practices are enough under the law. A listing of what specific companies under consent orders are required to do is helpful. But these consent decrees do not require companies to admit liability or contain anything close to the reasoning that accompanies court opinions or normal agency guidance on complying with the law. 

For instance, the general formulation in these 2019 orders is that the company must “establish, implement, and maintain a comprehensive information/software security program that is designed to protect the security, confidentiality, and integrity of such personal information. To satisfy this requirement, Respondent/Defendant must, at a minimum…” (emphasis added), followed by a list of fairly similar requirements with variation depending on the business. Even if a company does all of the listed requirements but a breach occurs, the FTC is not obligated to find the data security program was legally sufficient. There is no safe harbor or presumptive reasonableness that attaches even for the business subject to the order, much less for other companies looking for guidance. 

While the FTC does now require more specific things, like “yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption,” there is still no analysis on how to meet the standard of reasonableness the FTC relies upon. In other words, it is not clear that this new approach to orders does anything to increase fair notice to companies as to what the FTC requires under Section 5 unfairness.

Second, nothing about the underlying process has really changed. The FTC can still investigate and prosecute cases through administrative law courts with itself as initial court of appeal. This makes the FTC the police, prosecutor, and judge in its own case. In the case of LabMD, which actually won after many appeals, this process ended in bankruptcy. It is no surprise that since the LabMD decision, each of the FTC’s data security enforcement cases has been settled with consent orders, just as they were before the Eleventh Circuit opinion. 

Unfortunately, if the FTC really wants to evolve its data security process like the common law, it needs to engage in an actual common law process. Without caselaw on the facts necessary to establish substantial injury, “unreasonable” data security practices, and causation, there will continue to be more questions than answers about what the law requires. And without changes to the process, the FTC will continue to be able to strong-arm companies into consent decrees.

Every five years, Congress must reauthorize the sunsetting provisions of the Satellite Television Extension and Localism Act (STELA), and the deadline for renewing the law (Dec. 31) is quickly approaching. While sunsetting is, in the abstract, a good way to ensure rules don’t become outdated, an interlocking set of interest groups, generally speaking, supports reauthorizing the law only because they are locked in a regulatory stalemate. STELA no longer represents an optimal outcome for many, if not most, of the affected parties. The time has come to finally allow STELA to sunset, and to use the occasion to reform the underlying regulatory morass on which it is built.

Since the creation of STELA in 1988, much has changed in the marketplace. At the time of the 1992 Cable Act (the first year data from the FCC’s Video Competition Reports is available), cable providers served 95% of multichannel video subscribers. Now, the power of cable has waned to the extent that 2 of the top 4 multichannel video programming distributors (MVPDs) are satellite providers, without even considering the explosion in competition from online video distributors like Netflix and Amazon Prime.

Given these developments, Congress should reconsider whether STELA is necessary at all, along with the whole complex regulatory structure undergirding it, and consider the relative simplicity with which copyright and antitrust law are capable of adequately facilitating the market for broadcast content negotiations. An approach building upon that contemplated in the bipartisan Modern Television Act of 2019 by Congressman Steve Scalise (R-LA) and Congresswoman Anna Eshoo (D-CA)—which would repeal the compulsory license/retransmission consent regime for both cable and satellite—would be a step in the right direction.

A brief history of STELA

STELA, which began as the 1988 Satellite Home Viewer Act, was originally justified as necessary to promote satellite competition against incumbent cable networks and to give satellite companies stronger negotiating positions against network broadcasters. In particular, the goal was to give satellite providers the ability to transmit terrestrial network broadcasts to subscribers. To do this, the law modified both the Communications Act and the Copyright Act.

With the 1988 Satellite Home Viewer Act, Congress created a compulsory license for satellite retransmissions under Section 119 of the Copyright Act. This compulsory license provision mandated, just as the Cable Act did for cable providers, that satellite providers would have the right to certain network broadcast content in exchange for a government-set price (despite the fact that local network affiliates don’t necessarily own the copyrights themselves). The retransmission consent provision requires satellite providers (and cable providers under the Cable Act) to negotiate with network broadcasters over the fee to be paid for the right to network broadcast content.

Alternatively, broadcasters can opt to impose must-carry provisions on cable and satellite operators in lieu of retransmission consent negotiations. These provisions require satellite and cable operators to carry many channels from network broadcasters in order to have access to their content. As ICLE President Geoffrey Manne explained to Congress previously:

The must-carry rules require that, for cable providers offering 12 or more channels in their basic tier, at least one-third of these be local broadcast retransmissions. The forced carriage of additional, less-favored local channels results in a “tax on capacity,” and at the margins causes a reduction in quality… In the end, must-carry rules effectively transfer significant programming decisions from cable providers to broadcast stations, to the detriment of consumers… Although the ability of local broadcasters to opt in to retransmission consent in lieu of must-carry permits negotiation between local broadcasters and cable providers over the price of retransmission, must-carry sets a floor on this price, ensuring that payment never flows from broadcasters to cable providers for carriage, even though for some content this is surely the efficient transaction.

The essential questions about the reauthorization of STELA concern the following provisions:

  1. an exemption from retransmission consent requirements for satellite operators for the carriage of distant network signals to “unserved households” while maintaining the compulsory license right for those signals (modification of the compulsory license/retransmission consent regime);
  2. the prohibition on exclusive retransmission consent contracts between MVPDs and network broadcasters (per se ban on a business model); and
  3. the requirement that television broadcast stations and MVPDs negotiate in good faith (nebulous negotiating standard reviewed by FCC).

This regulatory scheme was supposed to sunset after 5 years. Instead of actually sunsetting, Congress has consistently reauthorized STELA (in 1994, 1999, 2004, 2010, and 2014).

Each time, satellite companies like DirecTV and Dish Network, as well as interest groups representing rural customers who depend heavily on satellite for television service, strongly supported renewal of the legislation. Over time, though, reauthorization has led to amendments supported by major players on each side of the negotiating table and broad support for what is widely considered “must-pass” legislation. In other words, every affected industry found something it liked about the compromise legislation.

As it stands, STELA’s sunset provision gives each side negotiating leverage during each round of reauthorization talks, from which concessions are often extracted. But rather than simplifying this regulatory morass, STELA reauthorization simply extends rules that have outlived their purpose.

Current marketplace competition undermines the necessity of STELA reauthorization

The marketplace is very different in 2019 than it was when STELA’s predecessors were adopted and reauthorized. No longer is it the case that cable dominates and that satellite and other providers need a leg up just to compete. Moreover, there are now services that didn’t even exist when the STELA framework was first developed. Competition is thriving.


Rank  Service              Subscribers  Owner                          Type
4     Dish                 9,905,000    Dish Network                   Satellite
5     Verizon Fios TV      4,451,000    Verizon                        Fiber-Optic
6     Cox Cable TV         4,015,000    Cox Enterprises                Cable
7     U-Verse TV           3,704,000    AT&T                           Fiber-Optic
8     Optimum/Suddenlink   3,307,500    Altice USA                     Cable
9     Sling TV*            2,417,000    Dish Network                   Live Streaming
10    Hulu with Live TV    2,000,000    Hulu (Disney, Comcast, AT&T)   Live Streaming
11    DirecTV Now          1,591,000    AT&T                           Live Streaming
12    YouTube TV           1,000,000    Google (Alphabet)              Live Streaming
13    Frontier FiOS        838,000      Frontier                       Fiber-Optic
15    PlayStation Vue      500,000      Sony                           Live Streaming
16    CableOne Cable TV    326,423      Cable One                      Cable
17    FuboTV               250,000      FuboTV                         Live Streaming

A 2018 accounting of the largest MVPDs by subscribers shows that two of the top four are satellite providers, and that over-the-top services like Sling TV, Hulu with Live TV, and YouTube TV are gaining significantly. And this does not even consider (non-live) streaming services such as Netflix (approximately 60 million US subscribers), Hulu (about 28 million US subscribers), and Amazon Prime Video (about 40 million US users). It is not clear from these numbers that satellite needs special rules in order to compete with cable, or that the complex regulatory regime underlying STELA is necessary anymore.

On the contrary, there seems to be good reason to believe that content is king, and the market for the distribution of that content is thriving. Competition among platforms is intense, not only among MVPDs like Comcast, DirecTV, Charter, and Dish Network, but also from streaming services like Netflix, Amazon Prime Video, Hulu, and HBO Now. Distribution networks invest heavily in exclusive content to attract consumers. There is no reason to think we need selective forbearance from the byzantine regulations in this space in order to promote satellite adoption when satellite companies are just as good as any at contracting for high-demand content (for instance, DirecTV with NFL Sunday Ticket).

A better way forward: Streamlined regulation in the form of copyright and antitrust

As Geoffrey Manne said in his Congressional testimony on STELA reauthorization back in 2013: 

behind all these special outdated regulations are laws of general application that govern the rest of the economy: antitrust and copyright. These are better, more resilient rules. They are simple rules for a complex world. They will stand up far better as video technology evolves–and they don’t need to be sunsetted.

Copyright law establishes clearly defined rights, thereby permitting efficient bargaining between content owners and distributors. But under the compulsory license system, the copyright holders’ right to a performance license is fundamentally abridged. Retransmission consent normally requires fees to be paid for the content that MVPDs have available to them. But STELA exempts certain network broadcasts (“distant signals” for “unserved households”) from retransmission consent requirements. This reduces incentives to develop content subject to STELA, which at the margin harms both content creators and viewers. It also gives satellite an unfair advantage vis-à-vis cable in those cases where it does not need to pay ever-rising retransmission consent fees. Ironically, it also reduces the incentive for satellite providers (DirecTV, at least) to work to provide local content to some rural consumers. Congress should reform the law to restore copyright holders’ full rights under the Copyright Act. Congress should also repeal the compulsory license and must-carry provisions that work at cross-purposes, and allow true marketplace negotiations.

The initial allocation of property rights guaranteed under copyright law would allow for MVPDs, including satellite providers, to negotiate with copyright holders for content, and thereby realize a more efficient set of content distribution outcomes than is otherwise possible. Under the compulsory license/retransmission consent regime underlying both STELA and the Cable Act, the outcomes at best approximate those that would occur through pure private ordering but in most cases lead to economically inefficient results because of the thumb on the scale in favor of the broadcasters. 

In a similar way, just as copyright law provides a superior set of bargaining conditions for content negotiation, antitrust law provides a superior mechanism for policing potentially problematic conduct between the firms involved. Under STELA, the FCC polices transactions with a “good faith” standard. In an important sense, this ambiguous regulatory discretion provides little information to prospective buyers and sellers of licenses as to what counts as “good faith” negotiations (aside from the specific practices listed).

By contrast, antitrust law, guided by the consumer welfare standard and decades of case law, is designed both to deter potential anticompetitive foreclosure and to provide a clear standard for firms engaged in the marketplace. The effect of relying on antitrust law to police competitive harms is — as the name of the standard suggests — a net increase in the welfare of consumers, the ultimate beneficiaries of a well-functioning market.

For instance, consider a hypothetical dispute between a network broadcaster and a satellite provider. Under the FCC’s “good faith” oversight, bargaining disputes, which are increasingly resulting in blackouts, are reviewed for certain negotiating practices deemed to be unfair, 47 CFR § 76.65(b)(1), and by a more general “totality of the circumstances” standard, 47 CFR § 76.65(b)(2). This is both over- and under-inclusive as the negotiating practices listed in (b)(1) may have procompetitive benefits in certain circumstances, and the (b)(2) totality of the circumstances standard is vague and ill-defined. By comparison, antitrust claims would be adjudicated through a foreseeable process with reference to a consumer welfare standard illuminated by economic evidence and case law.

If a satellite provider alleges anticompetitive foreclosure by a refusal to license, its claims would be subject to analysis under the Sherman Act. In order to prove its case, it would need to show that the network broadcaster has power in a properly defined market and is using that market power to foreclose competition by leveraging its ownership of network content to the detriment of consumer welfare. A court would then analyze whether this refusal to deal violates antitrust law under the Trinko and Aspen Skiing standards. Economic evidence supporting the allegation would need to be introduced.

And, critically, in this process, the defendants would be entitled to raise evidence in their case — both evidence suggesting that there was no foreclosure and evidence of procompetitive justifications for decisions that might otherwise be considered foreclosure. Ultimately, a court, bound by established, nondiscretionary standards, would weigh the evidence and make a determination. It is, of course, possible that a review for “good faith” conduct could reach the correct result, but there is simply no similarly rigorous process available to consistently push it in that direction.

The above-mentioned Modern Television Act of 2019 does represent a step in the right direction, as it would repeal the compulsory license/retransmission consent regime applied to both cable and satellite operators. However, it is imperfect, as it leaves must-carry requirements in place for local content and retains the “good faith” negotiating standard to be enforced by the FCC.

Expiration is better than the status quo even if fundamental reform is not possible

Some scholars who have written on this issue, and who very much agree that fundamental reform is needed, nonetheless argue that STELA should be renewed if more fundamental reforms like those described above can’t be achieved. For instance, George Ford recently wrote that

With limited days left in the legislative calendar before STELAR expires, there is insufficient time for a sensible solution to this complex issue. Senate Commerce Committee Chairman Roger Wicker (R-Miss.) has offered a “clean” STELAR reauthorization bill to maintain the status quo, which would provide Congress with some much-needed breathing room to begin tackling the gnarly issue of how broadcast signals can be both widely retransmitted and compensated. Congress and the Trump administration should welcome this opportunity.

However, even in a world without more fundamental reform, it is not clear that satellite needs distant signals in order to compete with cable. The number of “short markets”—i.e., those without access to all four local network broadcasts—implicated by the loss of distant signals is relatively small. Regardless of how badly the overall regulatory scheme needs updating, it makes no sense to continue to preserve STELA’s provisions that benefit satellite when they are no longer necessary on competition grounds.


Congress should not only let STELA sunset, but it should consider reforming the entire compulsory license/retransmission consent regime as the Modern Television Act of 2019 aims to do. In fact, reformers should look to go even further in repealing must-carry provisions and the good faith negotiating standard enforced by the FCC. Copyright and antitrust law are much better rules for this constantly evolving space than the current sector-specific rules. 

For previous work from ICLE on STELA see The Future of Video Marketplace Regulation (written testimony of ICLE President Geoffrey Manne from June 12, 2013) and Joint Comments of ICLE and TechFreedom, In the Matter of STELA Reauthorization and Video Programming Reform (March 19, 2014). 

Today, I filed a regulatory comment in the FTC’s COPPA Rule Review on behalf of the International Center for Law & Economics. Building on prior work, I argue the FTC’s 2013 amendments to the COPPA Rule should be repealed. 

The amendments ignored the purpose of COPPA by focusing on protecting children from online targeted advertising rather than from online predators, as the drafters had intended. The amendment of the definition of personal information to include “persistent identifiers” standing alone is inconsistent with the statute’s text. The legislative history explicitly identifies the protection of children from online predators as a purpose of COPPA, but nothing in the statute or the legislative history states that a purpose is to protect children from online targeted advertising.

The YouTube enforcement action and YouTube’s resulting compliance efforts will make the monetization of child-friendly content very difficult. Video game creators, family vloggers, toy reviewers, children’s apps, and educational technology will all be implicated by the changes to YouTube’s platform. The economic consequences are easy to predict: there will likely be less zero-priced family-friendly content available.

The 2013 amendments have uncertain benefits for children’s privacy. While some may see a benefit in less targeted advertising toward children, there is also a cost in restricting the ability of children’s content creators to monetize their work. The FTC should not presume that parents fail to balance these costs and benefits; many parents weigh the tradeoffs of targeted advertising and choose to allow their kids to use YouTube and apps on devices bought for them.

The full comments are here.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails a cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars who make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after driving all retail competitors out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary when antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. But neither is a clear example of harm to consumers, nor can either be used to show that antitrust frameworks in Europe are superior to those in the United States.

In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data. 

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers, takes it a step further. Data from legacy U.S. airline mergers appear to show they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy in telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, the average obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, titled U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people live together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th among the 29 countries studied (most of them European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries, if data usage is included (Model 3), and ranks 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

Country | Model 1 Price (Rank) | Model 2 Price (Rank) | Model 3 Price (Rank) | Model 4 Price (Rank)
Australia | $78.30 (28) | $82.81 (27) | $102.63 (26) | $84.45 (23)
Austria | $48.04 (17) | $60.59 (15) | $73.17 (11) | $74.02 (17)
Belgium | $46.82 (16) | $66.62 (21) | $75.29 (13) | $81.09 (22)
Canada | $69.66 (27) | $74.99 (25) | $92.73 (24) | $76.57 (19)
Chile | $33.42 (8) | $73.60 (23) | $83.81 (20) | $88.97 (25)
Czech Republic | $26.83 (3) | $49.18 (6) | $69.91 (9) | $60.49 (6)
Denmark | $43.46 (14) | $52.27 (8) | $69.37 (8) | $63.85 (8)
Estonia | $30.65 (6) | $56.91 (12) | $81.68 (19) | $69.06 (12)
Finland | $35.00 (9) | $37.95 (1) | $57.49 (2) | $51.61 (1)
France | $30.12 (5) | $44.04 (4) | $61.96 (4) | $54.25 (3)
Germany | $36.00 (12) | $53.62 (10) | $75.09 (12) | $66.06 (11)
Greece | $35.38 (10) | $64.51 (19) | $80.72 (17) | $78.66 (21)
Iceland | $65.78 (25) | $73.96 (24) | $94.85 (25) | $90.39 (26)
Ireland | $56.79 (22) | $62.37 (16) | $76.46 (14) | $64.83 (9)
Italy | $29.62 (4) | $48.00 (5) | $68.80 (7) | $59.00 (5)
Japan | $40.12 (13) | $53.58 (9) | $81.47 (18) | $72.12 (15)
Latvia | $20.29 (1) | $42.78 (3) | $63.05 (5) | $52.20 (2)
Luxembourg | $56.32 (21) | $54.32 (11) | $76.83 (15) | $72.51 (16)
Mexico | $35.58 (11) | $91.29 (29) | $120.40 (29) | $109.64 (29)
Netherlands | $44.39 (15) | $63.89 (18) | $89.51 (21) | $77.88 (20)
New Zealand | $59.51 (24) | $81.42 (26) | $90.55 (22) | $76.25 (18)
Norway | $88.41 (29) | $71.77 (22) | $103.98 (27) | $96.95 (27)
Portugal | $30.82 (7) | $58.27 (13) | $72.83 (10) | $71.15 (14)
South Korea | $25.45 (2) | $42.07 (2) | $52.01 (1) | $56.28 (4)
Spain | $54.95 (20) | $87.69 (28) | $115.51 (28) | $106.53 (28)
Sweden | $52.48 (19) | $52.16 (7) | $61.08 (3) | $70.41 (13)
Switzerland | $66.88 (26) | $65.01 (20) | $91.15 (23) | $84.46 (24)
United Kingdom | $50.77 (18) | $63.75 (17) | $79.88 (16) | $65.44 (10)
United States | $58.00 (23) | $59.84 (14) | $64.75 (6) | $62.94 (7)
Average | $46.55 | $61.70 | $80.24 | $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality
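
To make the rank movement concrete, here is a minimal illustrative sketch (a hypothetical snippet, not part of the FCC report) that simply reads off the United States' row from the table above and shows how its rank improves as each model adjusts for more factors:

```python
# United States row from the table above: (price, rank) under each
# of the FCC report's four models, out of 29 countries studied.
us_row = {
    "Model 1": (58.00, 23),  # unadjusted
    "Model 2": (59.84, 14),  # adjusted for demographics
    "Model 3": (64.75, 6),   # adjusted for demographics and data usage
    "Model 4": (62.94, 7),   # adjusted for demographics and content quality
}

for model, (price, rank) in us_row.items():
    print(f"{model}: ${price:.2f}, ranked {rank} of 29")
```

The point of the exercise is that the same underlying price data supports very different rankings depending on which demand-side factors the model controls for.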

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.


At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum's claims may itself be an illustration of caring too much about economic models instead of learning "the lessons of history." But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There's no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, "When the facts change, I change my mind. What do you do, sir?"

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

A spate of recent newspaper investigations and commentary has focused on Apple allegedly discriminating against rivals in the App Store. The underlying assumption is that Apple, as a vertically integrated entity that operates a platform for third-party apps and also makes its own apps, is acting nefariously whenever it "discriminates" against rival apps through prioritization, enters popular app markets, or charges a "tax" or "surcharge" on rival apps. 

For most people, the word discrimination has a pejorative connotation of animus based upon prejudice: racism, sexism, homophobia. One of the definitions you will find in the dictionary reflects this. But another definition is a lot less charged: the act of making or perceiving a difference. (This is what people mean when they say that a person has a discriminating palate, or a discriminating taste in music, for example.)

In economics, discrimination can be a positive attribute. For instance, effective price discrimination can result in wealthier consumers paying a higher price than less well off consumers for the same product or service, and it can ensure that products and services are in fact available for less-wealthy consumers in the first place. That would seem to be a socially desirable outcome (although under some circumstances, perfect price discrimination can be socially undesirable). 

Antitrust law rightly condemns conduct only when it harms competition, not simply when it harms a competitor. This is because it is competition that enhances consumer welfare, not the presence or absence of a competitor — or, indeed, the profitability of competitors. The difficult task for antitrust enforcers is to determine when a vertically integrated firm with "market power" in an upstream market is able to effectively discriminate against rivals in a downstream market in a way that harms consumers.

Even assuming the claims of critics are true, alleged discrimination by Apple against competitor apps in the App Store may harm those competitors, but it doesn’t necessarily harm either competition or consumer welfare.

The three potential antitrust issues facing Apple can be summarized as:

  • prioritizing its own apps over rivals' apps in App Store search results,
  • entering popular app markets with its own competing apps, and
  • charging a "tax" or "surcharge" on rival apps sold through the App Store.

There is nothing new here economically. All three issues are analogous to claims against other tech companies. But, as I detail below, the evidence to establish any of these claims at best represents harm to competitors, and fails to establish any harm to the competitive process or to consumer welfare.

Prioritization of Apple's own apps

Antitrust enforcers have rejected similar prioritization claims against Google. For instance, rivals like Microsoft and Yelp have funded attacks against Google, arguing the search engine is harming competition by prioritizing its own services in its product search results over competitors. As ICLE and affiliated scholars have pointed out, though, there is nothing inherently harmful to consumers about such prioritization. There are also numerous benefits in platforms directly answering queries, even if it ends up directing users to platform-owned products or services.

As Geoffrey Manne has observed:

there is good reason to believe that Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to vigorously compete and to decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content to partially displace the original “ten blue links” design of its search results page and offer its own answers to users’ queries in its stead. 

Here, the antitrust case against Apple for prioritization is similarly flawed. For example, as noted in a recent article in the WSJ, users often use the App Store search in order to find apps they already have installed:

“Apple customers have a very strong connection to our products and many of them use search as a way to find and open their apps,” Apple said in a statement. “This customer usage is the reason Apple has strong rankings in search, and it’s the same reason Uber, Microsoft and so many others often have high rankings as well.” 

If a substantial portion of searches within the App Store are for apps already on the iPhone, then showing the Apple app near the top of the search results could easily be consumer welfare-enhancing. 

Apple is also theoretically leaving money on the table by prioritizing its (already pre-loaded) apps over third party apps. If its algorithm promotes its own apps over those that may earn it a 30% fee — additional revenue — the prioritization couldn’t plausibly be characterized as a “benefit” to Apple. Apple is ultimately in the business of selling hardware. Losing customers of the iPhone or iPad by prioritizing apps consumers want less would not be a winning business strategy.

Further, it stands to reason that those who use an iPhone may have a preference for Apple apps. Such consumers would be naturally better served by seeing Apple’s apps prioritized over third-party developer apps. And if consumers do not prefer Apple’s apps, rival apps are merely seconds of scrolling away.

Moreover, all of the above assumes that Apple is engaging in sufficiently pervasive discrimination through prioritization to have a major impact on the app ecosystem. But substantial evidence exists that the universe of searches for which Apple's algorithm prioritizes Apple apps is small. For instance, most searches are for branded apps already known by the searcher:

Keywords: how many are brands?

  • Top 500: 58.4%
  • Top 400: 60.75%
  • Top 300: 68.33%
  • Top 200: 80.5%
  • Top 100: 86%
  • Top 50: 90%
  • Top 25: 92%
  • Top 10: 100%

This is corroborated by data from the NYT's own study, which suggests Apple prioritized its own apps first in only roughly 1% of the overall keywords queried. 

Whatever the precise extent of the increase in prioritization, it seems that any claims of harm are undermined by the reality that almost 99% of App Store results don't list Apple apps first. 

The fact is, very few keyword searches are even allegedly affected by prioritization. And the algorithm is often adjusting to searches for apps already pre-loaded on the device. Under these circumstances, it is very difficult to conclude consumers are being harmed by prioritization in search results of the App Store.

Apple's entry into popular app markets

The issue of Apple building apps to compete with popular apps in its marketplace is similar to complaints about Amazon creating its own brands to compete with what is sold by third parties on its platform. For instance, as reported multiple times in the Washington Post:

Clue, a popular app that women use to track their periods, recently rocketed to the top of the App Store charts. But the app’s future is now in jeopardy as Apple incorporates period and fertility tracking features into its own free Health app, which comes preinstalled on every device. Clue makes money by selling subscriptions and services in its free app. 

However, there is nothing inherently anticompetitive about retailers selling their own brands. If anything, entry into the market is normally procompetitive. As Randy Picker recently noted with respect to similar claims against Amazon: 

The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws… I think that is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to build-or-bought first-party inventory. We have entire bodies of law— copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

If anything, Apple is in an even better position than Amazon. Apple invests revenue in app development, not because the apps themselves generate revenue, but because it wants people to use the hardware, i.e. the iPhones, iPads, and Apple Watches. The reason Apple created an App Store in the first place is because this allows Apple to make more money from selling devices. In order to promote security on those devices, Apple institutes rules for the App Store, but it ultimately decides whether to create its own apps and provide access to other apps based upon its desire to maximize the value of the device. If Apple chooses to create free apps in order to improve iOS for users and sell more hardware, it is not a harm to competition.

Apple’s ability to enter into popular app markets should not be constrained unless it can be shown that by giving consumers another choice, consumers are harmed. As noted above, most searches in the App Store are for branded apps to begin with. If consumers already know what they want in an app, it hardly seems harmful for Apple to offer — and promote — its own, additional version as well. 

In the case of Clue, if Apple creates a free health app, it may hurt sales for Clue. But it doesn’t hurt consumers who want the functionality and would prefer to get it from Apple for free. This sort of product evolution is not harming competition, but enhancing it. And, it must be noted, Apple doesn’t exclude Clue from its devices. If, indeed, Clue offers a better product, or one that some users prefer, they remain able to find it and use it.

The so-called App Store “Tax”

The argument that Apple has an unfair competitive advantage over rival apps which have to pay commissions to Apple to be on the App Store (a “tax” or “surcharge”) has similarly produced no evidence of harm to consumers. 

Apple invested a lot into building the iPhone and the App Store. This infrastructure has created an incredibly lucrative marketplace for app developers to exploit. And, lest we forget a point fundamental to our legal system, Apple's App Store is its property.

The WSJ and NYT stories give the impression that Apple uses its commissions on third-party apps to reduce competition for its own apps. However, this is inconsistent with how Apple actually charges its commission.

For instance, Apple doesn't charge commissions on free apps, which make up 84% of the App Store. Apple also doesn't charge commissions for apps that are free to download but are supported by advertising — including hugely popular apps like Yelp, Buzzfeed, Instagram, Pinterest, Twitter, and Facebook. Even "reader" apps, where users purchase or subscribe to content outside the app but use the app to access that content, are not subject to commissions; examples include Spotify, Netflix, Amazon Kindle, and Audible. Apps for "physical goods and services" — like Amazon, Airbnb, Lyft, Target, and Uber — are also free to download and are not subject to commissions. The class of apps subject to a 30% commission includes:

  • paid apps (like many games),
  • free apps with in-app purchases (other games and services like Skype and TikTok),
  • free apps with digital subscriptions (like Pandora and Hulu, which pay a 30% commission in the first year and 15% in subsequent years), and
  • cross-platform apps (like Dropbox, Hulu, and Minecraft), which allow digital goods and services to be purchased in-app; Apple collects a commission on in-app sales but not on sales made on other platforms.
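
As a rough arithmetic sketch of the tiered subscription commission just described (30% of in-app subscription revenue in a subscriber's first year, 15% thereafter), using a hypothetical subscription price:

```python
def apple_cut(monthly_price, months):
    """Illustrative only: Apple's commission on in-app subscription
    revenue at the rates described above -- 30% for the first 12
    months of a subscription, 15% after that.
    (Hypothetical prices; rates as reported in the post.)"""
    total = 0.0
    for m in range(months):
        rate = 0.30 if m < 12 else 0.15
        total += monthly_price * rate
    return round(total, 2)

# A hypothetical $10/month subscription held for two years:
# year 1: 12 * $10 * 0.30 = $36.00; year 2: 12 * $10 * 0.15 = $18.00
print(apple_cut(10.00, 24))  # 54.0
```

Note how the effective rate falls the longer a subscriber stays: over two years, Apple's cut is 22.5% of total subscription revenue rather than 30%.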

Despite protestations to the contrary, these costs are hardly unreasonable: third-party apps receive the benefit not only of being in Apple's App Store (without which they wouldn't have any opportunity to earn revenue from sales on Apple's platform), but also of the features and other investments Apple continues to pour into its platform — investments that make the ecosystem better for consumers and app developers alike. There is enormous value in the platform Apple has invested in, and a great deal of it is willingly shared with developers and consumers. It is not anticompetitive to ask those who use the platform to pay for it. 

In fact, these benefits are probably even more important for smaller developers than for bigger ones, which can invest in the necessary back end to reach consumers without the App Store, like Netflix, Spotify, and Amazon Kindle. For apps without brand reputation (and giant marketing budgets), the ability of consumers to trust that downloading the app will not lead to the installation of malware (as often occurs when downloading from the web) is surely essential to small developers' ability to compete. The App Store offers this.

Despite the claims made in Spotify’s complaint against Apple, Apple doesn’t have a duty to deal with app developers. Indeed, Apple could theoretically fill the App Store with only apps that it developed itself, like Apple Music. Instead, Apple has opted for a platform business model, which entails the creation of a new outlet for others’ innovation and offerings. This is pro-consumer in that it created an entire marketplace that consumers probably didn’t even know they wanted — and certainly had no means to obtain — until it existed. Spotify, which out-competed iTunes to the point that Apple had to go back to the drawing board and create Apple Music, cannot realistically complain that Apple’s entry into music streaming is harmful to competition. Rather, it is precisely what vigorous competition looks like: the creation of more product innovation, lower prices, and arguably (at least for some) higher quality.

Interestingly, Spotify is not even subject to the App Store commission. Instead, Spotify offers a work-around to iPhone users to obtain its premium version without ads on iOS. What Spotify actually desires is the ability to sell premium subscriptions to Apple device users without paying anything above the de minimis up-front cost to Apple for the creation and maintenance of the App Store. It is unclear how many potential Spotify users are affected by the inability to directly buy the ad-free version since Spotify discontinued offering it within the App Store. But, whatever the potential harm to Spotify itself, there’s little reason to think consumers or competition bear any of it. 


There is no evidence that Apple’s alleged “discrimination” against rival apps harms consumers. Indeed, the opposite would seem to be the case. The regulatory discrimination against successful tech platforms like Apple and the App Store is far more harmful to consumers.

The FTC’s recent YouTube settlement and $170 million fine related to charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA) has the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.

According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.

COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While this isn’t completely solved by the use of targeted advertising based on web browsing and search history, the fact that such advertising is more lucrative compared to contextual advertisements suggests that it works better for companies.

COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old). 

As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube. 

YouTube's error, in the eyes of the FTC, was that it left it to channel owners on its general audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. It turns out that many of those channels — including channels identified by the FTC as "directed to children" — made the more lucrative choice by opting for targeted advertisements on their channels. 

Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?

COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the Internet access those devices require, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: to whom are the advertisements served to children actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children's parents.

Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: Parental oversight is essentially built in to any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.

When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.

On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:

[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….

At the same time, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:

This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.

This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children. 

Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech that violates the First Amendment. 

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called "bottleneck power" over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny requires only that a regulation further an important or substantial governmental interest unrelated to the suppression of free expression, and that the incidental restriction on speech be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like "bottleneck power." Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

Yesterday was President Trump's big "Social Media Summit," where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal: 

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment's protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court): 

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, since the Reagan FCC abolished it in 1987, have consistently angled for a resurrection of some form of fairness doctrine, as well as other policies inconsistent with the “free-speech-as-liberty” view. Thus, Democratic FCC Commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling private actors to carry others’ speech. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230, as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

After spending a few years away from ICLE, directly engaged in the day-to-day grind of indigent criminal defense as a public defender, I have a new appreciation for the ways economic tools can explain behavior in areas I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a product of insufficient funding, effort, or care, or even simply of too little time for overburdened lawyers, but as an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. Others, however, use models like contracting lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or some hybrid of these approaches. Each model has its own advantages and disadvantages, but this post addresses only the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how to best spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers: one who works for a public defender’s office and receives a salary that does not depend on caseload or billable hours, and a private defense lawyer who charges his client for the work he puts in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the backburner in favor of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case without a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis to file the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.

The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” “not doing anything for them.” But the simple truth is that the public defender is choosing how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with which clients send the most letters or make the most calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes this means cases are worked on in chronological order, and insufficient time and effort is spent on particular cases that would have merited more investment because of quickly approaching deadlines on other cases. Sometimes it means the most persistent clients get the most time spent on their behalf, irrespective of the merits of their cases. At best, public defenders act like battlefield medics, performing triage by spending their time where they believe they can help the most.

Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price to take on a particularly difficult and time-consuming case. The public defender is therefore left to guess at the best use of his time using the heuristics described above and to do the best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reform of indigent defense undertaken as part of this broader effort should take into account the calculation problem inherent in the public defender’s office. Other institutional arrangements that do not suffer from this particular problem, such as a well-designed voucher system, may be preferable.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only with economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning it will also burden small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. In any event, multi-homing is ubiquitous in the Internet economy. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to affect these important US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.


Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. Indeed, it is plausible that overregulating online data collection could lead to greater use of paywalls to access content. This may be a greater threat to Internet openness than anything ISPs have done.

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions was contemplated by the FCC (it claims), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service under Title II has had the (supposedly) unintended consequence of sweeping in far more (in both scope of application and rules) than was bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.