Archives For Telecommunications

As the U.S. House Energy and Commerce Subcommittee on Oversight and Investigations convenes this morning for a hearing on overseeing federal funds for broadband deployment, it bears mention that one of the largest U.S. broadband-subsidy programs is actually likely to run out of money within the next year. Writing in Forbes, Roslyn Layton observes of the Affordable Connectivity Program (ACP) that it has enrolled more than 14 million households, concluding that it “may be the most effective broadband benefit program to date with its direct to consumer model.”

This may be true, but how should we measure effectiveness? One seemingly simple measure would be the number of households with at-home internet access who would not have it but for the ACP’s subsidies. Those households can be broadly divided into two groups:

  1. Households that signed up for ACP and got at-home internet; and
  2. Households that already have at-home internet, but would drop it without the ACP subsidies.

Conceptually, evaluating the first group is straightforward. We can survey ACP subscribers and determine whether they had internet access before receiving the ACP subsidies. The second group is much more difficult, if not impossible, to measure with the available information. We can only guess as to how many households would unsubscribe if the subsidies went away.

To give a bit of background on the program we now call the ACP: broadband has been included since 2016 as a supported service under the Federal Communications Commission’s (FCC) Lifeline program. Among the Lifeline program’s goals are to ensure the availability of broadband for low-income households (to close the so-called “digital divide”) and to minimize the Universal Service Fund contribution burden levied on consumers and businesses.

As part of the appropriations act enacted in 2021 in response to the COVID-19 pandemic, Congress created a temporary $3.2 billion Emergency Broadband Benefit (EBB) program within the Lifeline program. EBB provided eligible households with a $50 monthly discount on qualifying broadband service or bundled voice-broadband packages purchased from participating providers, as well as a one-time discount of up to $100 for the purchase of a device (computer or tablet). The EBB program was originally set to expire when the funds were depleted, or six months after the U.S. Department of Health and Human Services (HHS) declared an end to the pandemic.

With passage of the Infrastructure Investment and Jobs Act (IIJA) in November 2021, the EBB’s temporary subsidy was extended indefinitely and renamed the Affordable Connectivity Program, or ACP. The IIJA allocated an additional $14 billion to provide subsidies of $30 a month to eligible households. Without additional appropriations, the ACP is expected to run out of funding by early 2024.

The Case of the Nonadopters

According to the Information Technology & Innovation Foundation (ITIF), 97.6% of the U.S. population has access to a fixed connection of at least 25/3 Mbps through asymmetric digital subscriber line (ADSL), cable, fiber, or fixed wireless. Pew Research reports that 93% of its survey respondents indicated they have a broadband connection at home.

Pew’s results are in line with U.S. Census estimates from the American Community Survey. The figure below, summarizing information from 2021, shows that 92.6% of households had a broadband subscription or had access without having to pay for a subscription. If ITIF’s estimates of broadband availability are accurate, then approximately two-thirds of the households without broadband—6.4 million—have access to it but do not subscribe.

On the one hand, price is obviously a major factor driving adoption. For example, among the 7.4% of households who do not use the internet at home, Census surveys show about one-third indicate that price is one reason for not having an at-home connection, responding that they “can’t afford it” or that it’s “not worth the cost.” On the other hand, more than half of respondents said they “don’t need it” or are “not interested.”

But George Ford argues that these responses to the Census surveys are unhelpful in evaluating the importance of price relative to other factors. For example, if a consumer says broadband is “not worth the cost,” it’s not clear whether the “worth” is too low or the “cost” is too high. Consumers who are “not interested” in subscribing to an internet service are implicitly saying that they are not interested at current prices. In other words, there may be a price that is sufficiently low that uninterested consumers become interested.

But in some cases, that price may be zero—or even negative.

A 2022 National Telecommunications and Information Administration (NTIA) survey of internet use found that about 75% of offline households said they wanted to pay nothing for internet access. In addition, as shown in the figure above, about a quarter of households without a broadband or smartphone subscription claim that they can access the internet at home without paying for a subscription. Thus, there may be a substantial share of nonadopters who would not adopt even if the service were free to the consumer.

Aside from surveys, another way to evaluate the importance of price in internet-adoption decisions is with empirical estimates of demand elasticity. The price elasticity of demand is the percent change in the quantity demanded for a good, divided by the percent change in price. A demand curve with an elasticity between 0 and –1 is said to be inelastic, meaning the change in the quantity demanded is relatively less responsive to changes in price. An elasticity of less than –1 is said to be elastic, meaning the change in the quantity demanded is relatively more responsive to changes in price.

Michael Williams and Wei Zao’s survey of the research on the price elasticity of demand concludes that demand for internet services has traditionally been inelastic and has “become increasingly so over time.” They report a 2019 elasticity of –0.05 (down from –0.69 in 2008). George Ford’s 2021 study estimates an elasticity ranging from –0.58 to –0.33.  These results indicate that a subsidy program that reduced the price of internet services by 10% would increase adoption by anywhere from 0.5% (i.e., one-half of one percent) to 5.8%. In other words, a range from approximately zero to a small but significant increase.
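
To make the arithmetic explicit, the sketch below simply applies the point-elasticity formula described above to the elasticity estimates cited here. It is an illustration of the calculation only, not a model of actual broadband demand.

```python
# Back-of-the-envelope illustration: percent change in quantity demanded
# equals the price elasticity times the percent change in price.

def adoption_change(elasticity: float, price_change_pct: float) -> float:
    return elasticity * price_change_pct

price_cut = -10.0  # a 10% price reduction

estimates = {
    "Williams & Zao (2019)": -0.05,
    "Ford (2021), low end": -0.33,
    "Ford (2021), high end": -0.58,
}

for label, elasticity in estimates.items():
    change = adoption_change(elasticity, price_cut)
    print(f"{label}: elasticity {elasticity:+.2f} -> adoption change {change:+.1f}%")

# Prints changes ranging from +0.5% to +5.8%, the range discussed above.
```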

It is unsurprising that the demand for internet services is so inelastic, especially among those who do not subscribe to broadband or smartphone service. One reason is the nature of demand curves. Generally speaking, as quantity demanded increases (i.e., moves downward along the demand curve), the demand curve becomes less elastic, as shown in the figure below (which is an illustration of a hypothetical demand curve). With adoption currently at more than 90% of households, the remaining nonadopters are much less likely to adopt at any price.
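
To see why a demand curve becomes less elastic as quantity demanded increases, the short sketch below computes the point elasticity at several prices along a hypothetical linear demand curve. The curve and its parameters are invented purely for illustration and are not estimates of actual broadband demand.

```python
# Hypothetical linear demand curve (illustrative only): Q = 100 - 2P.
# Point elasticity at a given price is (dQ/dP) * (P / Q).

INTERCEPT = 100.0
SLOPE = -2.0  # dQ/dP

def point_elasticity(price: float) -> float:
    quantity = INTERCEPT + SLOPE * price
    return SLOPE * price / quantity

for price in (45, 30, 15, 5):
    quantity = INTERCEPT + SLOPE * price
    print(f"P = {price:>2}, Q = {quantity:>3.0f}: elasticity = {point_elasticity(price):+.2f}")

# Moving down the curve (lower price, higher quantity), the elasticity shrinks
# in magnitude from about -9.0 toward -0.1: demand becomes less elastic.
```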

Thus, there is a possibility that the ACP may be so successful that the program has hit a point of significant diminishing marginal returns. Now that nearly 95% of U.S. households with access to at-home internet actually use it, it may be very difficult and costly to convert the remaining 5% of nonadopters. For example, if Williams & Zao’s estimate of a price elasticity of –0.05 is correct, then even a subsidy that provided “free” internet would convert only half of the 5% of nonadopters.

Keeping the Country Connected

With all of this in mind, it’s important to recognize that the primary metric for success should not be solely based on adoption rates.

The ACP is not an attempt to create a perfect government program, but rather to address the imperfect realities we face. Some individuals may never adopt internet services, just as some never installed home-telephone services. Even at the peak of landline use in 1998, only 96.2% of households had one.

On the other hand, those who value broadband access may be forced to discontinue service if faced with financial difficulties. Therefore, the program’s objective should encompass both connecting new users and ensuring that economically vulnerable individuals maintain access.

Instead of pursuing an ideal regulatory or subsidy program, we should focus on making the most informed decisions in a context where information is limited. We know there is general demand for internet access and that a significant number of households might discontinue services during economic downturns. And we also know that, in light of these realities, numerous stakeholders advocate for invasive interventions in the broadband market, potentially jeopardizing private investment incentives.

Thus, even if the ACP program is not perfect in itself, it goes a long way toward satisfying the need to make sure the least well-off stay connected, while also allowing private providers to continue their track record of providing high-speed, affordable broadband.

And although we do not have data at the moment demonstrating exactly how many households would discontinue internet service in the absence of subsidies, if Congress does not appropriate additional ACP funds, we may soon have an unfortunate natural experiment that helps us to find out.

Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to accommodate peak demand.

Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will fall in the queue behind another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: The pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.

What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg? 

That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?

The complication in this regard is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a perfect gauge of the overall social welfare generated by particular policy choices.

As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.

Licensed, Unlicensed, & Shared Spectrum

Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.

SOURCE: U.S. Commerce Department

Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.

Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as it’s with FCC-certified equipment operating at authorized power levels. The most well-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. Wi-Fi isn’t the only use for unlicensed spectrum; it covers a range of devices such as those relying on Bluetooth, as well as personal medical devices, appliances, and a wide range of Internet-of-Things devices.

As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.

SOURCE: Wall Street Journal

While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:

The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.

The Need for Balanced Spectrum Policy

To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It can’t therefore be said that one approach is “better” than the other, as there is undeniable economic value to both.

That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.

To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.

For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.

SOURCE: Authors
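
The curves in the figure above are purely hypothetical, and the stylized model behind such a figure can be sketched in a few lines. The functional forms and parameters below are invented solely to illustrate how consumer welfare can peak at an interior share of licensed spectrum (here about 56%) while auction revenue keeps rising all the way to full licensing.

```python
import numpy as np

# Purely hypothetical functional forms, chosen only to mirror the figure above:
# consumer welfare peaks at an interior licensed share, while government
# auction revenue rises monotonically with the share of spectrum licensed.
licensed_share = np.linspace(0.0, 1.0, 101)

consumer_welfare = licensed_share * (1.12 - licensed_share)  # peaks at a 56% licensed share
auction_revenue = 0.30 * licensed_share                      # grows with licensing

print(f"Consumer welfare maximized at {licensed_share[consumer_welfare.argmax()]:.0%} licensed")
print(f"Auction revenue maximized at {licensed_share[auction_revenue.argmax()]:.0%} licensed")
```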

In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenues to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffuse costs.

Of course, we can debate about the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed spectrum will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is more than three times greater than the auction revenues received by the U.S. Treasury.

Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.

Conclusion

Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.

At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases.  As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.

Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it needs to take seriously the need to encourage both private ownership of licensed spectrum, as well as innovative uses of unlicensed and shared spectrum.

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases it actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” as part of the broader category of “sensitive covered data,” which could be collected or processed only with a consumer’s expression of affirmative consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of the ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change and “individual’s online activities” are once again deemed to be “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is necessary, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes a “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could have been strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appear to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and revert to the definition in the ADPPA version that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception that could have partially addressed the concern in Section 101(b)(2) (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. hold a possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty — public commitment — unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempts at data minimization (resulting in de-identification) if those firms may at any point in the future need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a problem very similar to that of the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically speaking, one would expect the law to allow the data to be shared in de-identified form; this would align with the principle of data minimization. What the ADPPA does instead is effectively to require that, if the possibility of re-identification is to be preserved, the data be shared together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given such open-ended provisions as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives become obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt the state’s own California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond ADPPA’s requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version, which reintroduced some of the notably bad features of the original discussion draft. The rules on de-identified data are also very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. Those examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

In an expected decision (but with a somewhat unexpected coalition), the U.S. Supreme Court has moved 5 to 4 to vacate an order issued early last month by the 5th U.S. Circuit Court of Appeals, which stayed an earlier December 2021 order from the U.S. District Court for the Western District of Texas enjoining Texas’ attorney general from enforcing the state’s recently enacted social-media law, H.B. 20. The law would bar social-media platforms with more than 50 million active users from engaging in “censorship” based on political viewpoint. 

The shadow-docket order serves to grant the preliminary injunction sought by NetChoice and the Computer & Communications Industry Association to block the law—which they argue is facially unconstitutional—from taking effect. The trade groups also are challenging a similar Florida law, which the 11th U.S. Circuit Court of Appeals last week ruled was “substantially likely” to violate the First Amendment. Both state laws will thus be stayed while challenges on the merits proceed. 

But the element of the Supreme Court’s order drawing the most initial interest is the “strange bedfellows” breakdown that produced it. Chief Justice John Roberts was joined by conservative Justices Brett Kavanaugh and Amy Coney Barrett and liberals Stephen Breyer and Sonia Sotomayor in moving to vacate the 5th Circuit’s stay. Meanwhile, Justice Samuel Alito wrote a dissent that was joined by fellow conservatives Clarence Thomas and Neil Gorsuch, and liberal Justice Elena Kagan also dissented without offering a written justification.

A glance at the recent history, however, reveals why it should not be all that surprising that the justices would not come down along predictable partisan lines. Indeed, when it comes to content moderation and the question of whether to designate platforms as “common carriers,” the one undeniably predictable outcome is that both liberals and conservatives have been remarkably inconsistent.

Both Sides Flip Flop on Common Carriage

Ever since Justice Thomas used his concurrence in 2021’s Biden v. Knight First Amendment Institute to lay out a blueprint for how states could regulate social-media companies as common carriers, states led by conservatives have been working to pass bills to restrict the ability of social media companies to “censor.” 

Forcing common carriage on the Internet was, not long ago, something conservatives opposed. It was progressives who called net neutrality the “21st Century First Amendment.” The actual First Amendment, however, protects the rights of both Internet service providers (ISPs) and social-media companies to decide the rules of the road on their own platforms.

Back in the heady days of 2014, when the Federal Communications Commission (FCC) was still planning its next moves on net neutrality after losing at the U.S. Court of Appeals for the D.C. Circuit the first time around, Geoffrey Manne and I at the International Center for Law & Economics teamed with Berin Szoka and Tom Struble of TechFreedom to write a piece for the First Amendment Law Review arguing that there was no exception that would render broadband ISPs “state actors” subject to the First Amendment. Further, we argued that the right to editorial discretion meant that net-neutrality regulations would be subject to (and likely fail) First Amendment scrutiny under Tornillo or Turner.

After the FCC moved to reclassify broadband as a Title II common carrier in 2015, then-Judge Kavanaugh of the D.C. Circuit dissented from the denial of en banc review, in part on First Amendment grounds. He argued that “the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market.” In fact, Kavanaugh went so far as to link the interests of ISPs and Big Tech (and even traditional media), stating:

If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

This was not a controversial view among free-market, right-of-center types at the time.

An interesting shift started to occur during the presidency of Donald Trump, however, as tensions between social-media companies and many on the right came to a head. Instead of seeing these companies as private actors with strong First Amendment rights, some conservatives began looking either for ways to apply the First Amendment to them directly as “state actors” or to craft regulations that would essentially make social-media companies into common carriers with regard to speech.

But Kavanaugh’s opinion in USTelecom remains the best way forward to understand how the First Amendment applies online today, whether regarding net neutrality or social-media regulation. Given Justice Alito’s view, expressed in his dissent, that it “is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” it is a fair bet that laws like those passed by Texas and Florida will get a hearing before the Court in the not-distant future. If Justice Kavanaugh’s opinion has sway among the conservative bloc of the Supreme Court, or is able to peel off justices from the liberal bloc, the Texas law and others like it (as well as net-neutrality regulations) will be struck down as First Amendment violations.

Kavanaugh’s USTelecom Dissent

In then-Judge Kavanaugh’s dissent, he highlighted two reasons he believed the FCC’s reclassification of broadband as Title II was unlawful. The first was that the reclassification decision was a “major question” that required clear authority delegated by Congress. The second, more important point was that the FCC’s reclassification decision was subject to the Turner standard. Under that standard, since the FCC did not engage—at the very least—in a market-power analysis, the rules could not stand, as they amounted to mandated speech.

The interesting part of this opinion is that it tracks very closely to the analysis of common-carriage requirements for social-media companies. Kavanaugh’s opinion offered important insights into:

  1. the applicability of the First Amendment right to editorial discretion to common carriers;
  2. the “use it or lose it” nature of this right;
  3. whether Turner’s protections depended on scarcity; and 
  4. what would be required to satisfy Turner scrutiny.

Common Carriage and First Amendment Protection

Kavanaugh found unequivocally that common carriers, such as ISPs classified under Title II, were subject to First Amendment protection under the Turner decisions:

The Court’s ultimate conclusion on that threshold First Amendment point was not obvious beforehand. One could have imagined the Court saying that cable operators merely operate the transmission pipes and are not traditional editors. One could have imagined the Court comparing cable operators to electricity providers, trucking companies, and railroads – all entities subject to traditional economic regulation. But that was not the analytical path charted by the Turner Broadcasting Court. Instead, the Court analogized the cable operators to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment. As Turner Broadcasting concluded, the First Amendment’s basic principles “do not vary when a new and different medium for communication appears” – although there of course can be some differences in how the ultimate First Amendment analysis plays out depending on the nature of (and competition in) a particular communications market. Brown v. Entertainment Merchants Association, 564 U.S. 786, 790 (2011) (internal quotation mark omitted).

Here, of course, we deal with Internet service providers, not cable television operators. But Internet service providers and cable operators perform the same kinds of functions in their respective networks. Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit. Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.

Indeed, some of the same entities that provide cable television service – colloquially known as cable companies – provide Internet access over the very same wires. If those entities receive First Amendment protection when they transmit television stations and networks, they likewise receive First Amendment protection when they transmit Internet content. It would be entirely illogical to conclude otherwise. In short, Internet service providers enjoy First Amendment protection of their rights to speak and exercise editorial discretion, just as cable operators do.

‘Use It or Lose It’ Right to Editorial Discretion

Kavanaugh questioned whether the First Amendment right to editorial discretion depends, to some degree, on how much the entity used the right. Ultimately, he rejected the idea forwarded by the FCC that, since ISPs don’t restrict access to any sites, they were essentially holding themselves out to be common carriers:

I find that argument mystifying. The FCC’s “use it or lose it” theory of First Amendment rights finds no support in the Constitution or precedent. The FCC’s theory is circular, in essence saying: “They have no First Amendment rights because they have not been regularly exercising any First Amendment rights and therefore they have no First Amendment rights.” It may be true that some, many, or even most Internet service providers have chosen not to exercise much editorial discretion, and instead have decided to allow most or all Internet content to be transmitted on an equal basis. But that “carry all comers” decision itself is an exercise of editorial discretion. Moreover, the fact that the Internet service providers have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion. That would be akin to arguing that people lose the right to vote if they sit out a few elections. Or citizens lose the right to protest if they have not protested before. Or a bookstore loses the right to display its favored books if it has not done so recently. That is not how constitutional rights work. The FCC’s “use it or lose it” theory is wholly foreign to the First Amendment.

Employing a similar logic, Kavanaugh also rejected the notion that net-neutrality rules were essentially voluntary, given that ISPs held themselves out as carrying all content.

Relatedly, the FCC claims that, under the net neutrality rule, an Internet service provider supposedly may opt out of the rule by choosing to carry only some Internet content. But even under the FCC’s description of the rule, an Internet service provider that chooses to carry most or all content still is not allowed to favor some content over other content when it comes to price, speed, and availability. That half-baked regulatory approach is just as foreign to the First Amendment. If a bookstore (or Amazon) decides to carry all books, may the Government then force the bookstore (or Amazon) to feature and promote all books in the same manner? If a newsstand carries all newspapers, may the Government force the newsstand to display all newspapers in the same way? May the Government force the newsstand to price them all equally? Of course not. There is no such theory of the First Amendment. Here, either Internet service providers have a right to exercise editorial discretion, or they do not. If they have a right to exercise editorial discretion, the choice of whether and how to exercise that editorial discretion is up to them, not up to the Government.

Think about what the FCC is saying: Under the rule, you supposedly can exercise your editorial discretion to refuse to carry some Internet content. But if you choose to carry most or all Internet content, you cannot exercise your editorial discretion to favor some content over other content. What First Amendment case or principle supports that theory? Crickets.

In a footnote, Kavanaugh continued to lambast the theory of “voluntary regulation” forwarded by the concurrence, stating:

The concurrence in the denial of rehearing en banc seems to suggest that the net neutrality rule is voluntary. According to the concurrence, Internet service providers may comply with the net neutrality rule if they want to comply, but can choose not to comply if they do not want to comply. To the concurring judges, net neutrality merely means “if you say it, do it.”…. If that description were really true, the net neutrality rule would be a simple prohibition against false advertising. But that does not appear to be an accurate description of the rule… It would be strange indeed if all of the controversy were over a “rule” that is in fact entirely voluntary and merely proscribes false advertising. In any event, I tend to doubt that Internet service providers can now simply say that they will choose not to comply with any aspects of the net neutrality rule and be done with it. But if that is what the concurrence means to say, that would of course avoid any First Amendment problem: To state the obvious, a supposed “rule” that actually imposes no mandates or prohibitions and need not be followed would not raise a First Amendment issue.

Scarcity and Capacity to Carry Content

The FCC had also argued that there was a difference between ISPs and the cable companies in Turner in that ISPs did not face decisions about scarcity in content carriage. But Kavanaugh rejected this theory as inconsistent with the First Amendment’s right not to be compelled to carry a message or speech.

That argument, too, makes little sense as a matter of basic First Amendment law. First Amendment protection does not go away simply because you have a large communications platform. A large bookstore has the same right to exercise editorial discretion as a small bookstore. Suppose Amazon has capacity to sell every book currently in publication and therefore does not face the scarcity of space that a bookstore does. Could the Government therefore force Amazon to sell, feature, and promote every book on an equal basis, and prohibit Amazon from promoting or recommending particular books or authors? Of course not. And there is no reason for a different result here. Put simply, the Internet’s technological architecture may mean that Internet service providers can provide unlimited content; it does not mean that they must.

Keep in mind, moreover, why that is so. The First Amendment affords editors and speakers the right not to speak and not to carry or favor unwanted speech of others, at least absent sufficient governmental justification for infringing on that right… That foundational principle packs at least as much punch when you have room on your platform to carry a lot of speakers as it does when you have room on your platform to carry only a few speakers.

Turner Scrutiny and Bottleneck Market Power

Finally, Kavanaugh applied Turner scrutiny and found that, at the very least, it requires a finding of “bottleneck market power” that would allow ISPs to harm consumers. 

At the time of the Turner Broadcasting decisions, cable operators exercised monopoly power in the local cable television markets. That monopoly power afforded cable operators the ability to unfairly disadvantage certain broadcast stations and networks. In the absence of a competitive market, a broadcast station had few places to turn when a cable operator declined to carry it. Without Government intervention, cable operators could have disfavored certain broadcasters and indeed forced some broadcasters out of the market altogether. That would diminish the content available to consumers. The Supreme Court concluded that the cable operators’ market-distorting monopoly power justified Government intervention. Because of the cable operators’ monopoly power, the Court ultimately upheld the must-carry statute…

The problem for the FCC in this case is that here, unlike in Turner Broadcasting, the FCC has not shown that Internet service providers possess market power in a relevant geographic market… 

Rather than addressing any problem of market power, the net neutrality rule instead compels private Internet service providers to supply an open platform for all would-be Internet speakers, and thereby diversify and increase the number of voices available on the Internet. The rule forcibly reduces the relative voices of some Internet service and content providers and enhances the relative voices of other Internet content providers.

But except in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry… Turner Broadcasting did not allow the Government to satisfy intermediate scrutiny merely by asserting an interest in diversifying or increasing the number of speakers available on cable systems. After all, if that interest sufficed to uphold must-carry regulation without a showing of market power, the Turner Broadcasting litigation would have unfolded much differently. The Supreme Court would have had little or no need to determine whether the cable operators had market power. But the Supreme Court emphasized and relied on the Government’s market power showing when the Court upheld the must-carry requirements… To be sure, the interests in diversifying and increasing content are important governmental interests in the abstract, according to the Supreme Court. But absent some market dysfunction, Government regulation of the content carriage decisions of communications service providers is not essential to furthering those interests, as is required to satisfy intermediate scrutiny.

In other words, without a finding of bottleneck market power, there would be no basis for satisfying the government interest prong of Turner.

Applying Kavanaugh’s Dissent to NetChoice v. Paxton

Interestingly, each of these main points arises in the debate over regulating social-media companies as common carriers. Texas’ H.B. 20 attempts to do exactly that, which is at the heart of the litigation in NetChoice v. Paxton.

Common Carriage and First Amendment Protection

To the first point, Texas attempts to claim in its briefs that social-media companies are common carriers subject to lesser First Amendment protection: “Assuming the platforms’ refusals to serve certain customers implicated First Amendment rights, Texas has properly denominated the platforms common carriers. Imposing common-carriage requirements on a business does not offend the First Amendment.”

But much like the cable operators before them in Turner, social-media companies are not simply carriers of persons or things like the classic examples of railroads, telegraphs, and telephones. As TechFreedom put it in its brief: “As its name suggests… ‘common carriage’ is about offering, to the public at large  and on indiscriminate terms, to carry generic stuff from point A to point B. Social media websites fulfill none of these elements.”

In a sense, it’s even clearer that social-media companies are not common carriers than it was in the case of ISPs, because social-media platforms have always had terms of service that limit what can be said and that even allow the platforms to remove users for violations. All social-media platforms curate content for users in ways that ISPs normally do not.

‘Use It or Lose It’ Right to Editorial Discretion

Just as the FCC did in the Title II context, Texas also presses the idea that social-media companies gave up their right to editorial discretion by disclaiming the choice to exercise it, stating: “While the platforms compare their business policies to classic examples of First Amendment speech, such as a newspaper’s decision to include an article in its pages, the platforms have disclaimed any such status over many years and in countless cases. This Court should not accept the platforms’ good-for-this-case-only characterization of their businesses.” Pointing primarily to cases where social-media companies have invoked Section 230 immunity as a defense, Texas argues they have essentially lost the right to editorial discretion.

This, again, flies in the face of First Amendment jurisprudence, as Kavanaugh earlier explained. Moreover, the idea that social-media companies have disclaimed editorial discretion due to Section 230 is inconsistent with what that law actually does. Section 230 allows social-media companies to engage in as much or as little content moderation as they so choose by holding the third-party speakers accountable rather than the platform. Social-media companies do not relinquish their First Amendment rights to editorial discretion because they assert an applicable defense under the law. Moreover, social-media companies have long had rules delineating permissible speech, and they enforce those rules actively.

Interestingly, there has also been an analogue to the idea forwarded in USTelecom that the law’s First Amendment burdens are relatively limited. As noted above, then-Judge Kavanaugh rejected the idea forwarded by the concurrence that net-neutrality rules were essentially voluntary. In the case of H.B. 20, the bill’s original sponsor recently argued on Twitter that the Texas law essentially incorporates Section 230 by reference. If this is true, then the rules would be as pointless as the net-neutrality rules would have been, because social-media companies would be free under Section 230(c)(2) to remove “otherwise objectionable” material under the Texas law.

Scarcity and Capacity to Carry Content

In an earlier brief to the 5th Circuit, Texas attempted to differentiate social-media companies from the cable operators in Turner by arguing there was no necessary conflict between speakers, stating that “[HB 20] does not, for example, pit one group of speakers against another.” But this is just a different way of saying that, because social-media companies don’t face scarcity in their technical capacity to carry speech, they can be required to carry all speech. That is inconsistent with the right Kavanaugh identified not to carry a message or speech, which admits no exception based on a platform’s capacity to carry more speech.

Turner Scrutiny and Bottleneck Market Power

Finally, Judge Kavanaugh’s application of Turner to ISPs makes clear that a showing of bottleneck market power is necessary before common-carriage regulation may be applied to social-media companies. In fact, Kavanaugh used a comparison to social-media sites and broadcasters as a reductio ad absurdum for the idea that one could regulate ISPs without a showing of market power. As he put it there:

Consider the implications if the law were otherwise. If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

Much like the FCC with its Open Internet Order, Texas did not make a finding of bottleneck market power in H.B. 20. Instead, Texas basically asked for the opportunity to get to discovery to develop the case that social-media platforms have market power, stating that “[b]ecause the District Court sharply limited discovery before issuing its preliminary injunction, the parties have not yet had the opportunity to develop many factual questions, including whether the platforms possess market power.” This won’t fly under Turner, which required a legislative finding of bottleneck market power; H.B. 20 contains no such finding.

Moreover, bottleneck market power means more than simply “market power” in an antitrust sense. As Judge Kavanaugh put it: “Turner Broadcasting seems to require even more from the Government. The Government apparently must also show that the market power would actually be used to disadvantage certain content providers, thereby diminishing the diversity and amount of content available.” Here, that would mean showing not only that social-media companies have market power, but also that they would use it to disadvantage users in ways that reduce both the diversity and the total amount of content available.

The economics of multi-sided markets is probably the best explanation for why platforms have moderation rules. They are used to maximize a platform’s value by keeping as many users engaged and on those platforms as possible. In other words, the effect of moderation rules is to increase the amount of user speech by limiting harassing content that could repel users. This is a much better explanation for these rules than “anti-conservative bias” or a desire to censor for censorship’s sake (though there may be room for debate on the margin when it comes to the moderation of misinformation and hate speech).

In fact, social-media companies, unlike the cable operators in Turner, do not have the type of “physical connection between the television set and the cable network” that would grant them “bottleneck, or gatekeeper, control over” speech in ways that would allow platforms to “silence the voice of competing speakers with a mere flick of the switch.” Cf. Turner, 512 U.S. at 656. Even if they tried, social-media companies simply couldn’t prevent Internet users from accessing content they wish to see online; users inevitably will find such content by going to a different site or app.

Conclusion: The Future of the First Amendment Online

While many on both sides of the partisan aisle appear to see a stark divide between the interests of—and First Amendment protections afforded to—ISPs and social-media companies, Kavanaugh’s opinion in USTelecom shows clearly that they are in the same boat. The two rise or fall together. If the government can impose common-carriage requirements on social-media companies in the name of free speech, then it most assuredly can do so for ISPs. If the First Amendment protects the editorial discretion of one, then it protects the editorial discretion of both.

The question then moves to relative market power, and whether the dominant firms in either sector can truly be said to have “bottleneck” market power, which implies the physical control of infrastructure that social-media companies certainly lack.

While it will be interesting to see what the 5th Circuit (and, likely, the Supreme Court) ultimately does when reviewing H.B. 20 and similar laws, if now-Justice Kavanaugh’s dissent is any hint, there will be a strong contingent on the Court for finding that the First Amendment applies online by protecting the right of private actors (ISPs and social-media companies) to set the rules of the road on their property. As Kavanaugh put it in Manhattan Community Access Corp. v. Halleck: “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Competition, not prophylactic government regulation, is the best way to protect consumers’ interests.

With the 11th Circuit upholding the stay against Florida’s social-media law and the Supreme Court granting the emergency application to vacate the stay of the injunction in NetChoice v. Paxton, the future of the First Amendment online appears to be on firm ground. There is no basis to conclude that simply calling private actors “common carriers” reduces their right to editorial discretion under the First Amendment.

States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impede Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.

It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

That is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation of broadband.

The Infrastructure Act requires that each recipient of BEAD funding must offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could potentially be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other problems, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GON) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs to be unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated in the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include these potentially onerous requirements in the original infrastructure bill; NTIA now seeks to impose them without even an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that fix these deficiencies.

Federal Trade Commission (FTC) Chair Lina Khan recently joined with FTC Commissioner Rebecca Slaughter to file a “written submission on the public interest” in the U.S. International Trade Commission (ITC) Section 337 proceeding concerning imports of certain cellular-telecommunications equipment covered by standard essential patents (SEPs). SEPs are patents that “read on” technology adopted for inclusion in a standard. Regrettably, the commissioners’ filing embodies advice that, if followed, would effectively preclude Section 337 relief to SEP holders. Such a result would substantially reduce the value of U.S. SEPs and thereby discourage investments in standards that help drive American innovation.

Section 337 of the Tariff Act authorizes the ITC to issue “exclusion orders” blocking the importation of products that infringe U.S. patents, subject to certain “public interest” exceptions. Specifically, before issuing an exclusion order, the ITC must consider:

  1. the public health and welfare;
  2. competitive conditions in the U.S. economy;
  3. production of like or directly competitive articles in the United States; and
  4. U.S. consumers.

The Khan-Slaughter filing urges the ITC to consider the impact that issuing an exclusion order against a willing licensee implementing a standard would have on competition and consumers in the United States. The filing concludes that “where a complainant seeks to license and can be made whole through remedies in a different U.S. forum [a federal district court], an exclusion order barring standardized products from the United States will harm consumers and other market participants without providing commensurate benefits.”

Khan and Slaughter’s filing takes a one-dimensional view of the competitive effects of SEP rights. In short, it emphasizes that:

  1. standardization empowers SEP owners to “hold up” licensees by demanding more for a technology than it would have been worth, absent the standard;
  2. “hold ups” lead to higher prices and may discourage standard-setting activities and collaboration, which can delay innovation;
  3. many standard-setting organizations require FRAND (fair, reasonable, and non-discriminatory) licensing commitments from SEP holders to preclude hold-up and encourage standards adoption;
  4. FRAND commitments ensure that SEP licenses will be available at rates limited to the SEP’s “true” value;
  5. the threat of ITC exclusion orders would empower SEP holders to coerce licensees into paying “anticompetitively high” supra-FRAND licensing rates, discouraging investments in standard-compliant products;
  6. inappropriate exclusion orders harm consumers in the short term by depriving them of desired products and, in the longer run, through reduced innovation, competition, quality, and choice;
  7. thus, where the standard implementer is a “willing licensee,” an exclusion order would be contrary to the public interest; and
  8. as a general matter, exclusionary relief is incongruent and against the public interest where a court has been asked to resolve FRAND terms and can make the SEP holder whole.

In essence, Khan and Slaughter recite a parade of theoretical horribles, centered on anticompetitive hold-ups, to call for denying exclusion orders to SEP owners on public-interest grounds. Their filing’s analysis, however, fails as a matter of empirics, law, and sound economics.

First, the filing fails to note that there is a lack of empirical support for anticompetitive hold-up being a problem at all (see, for example, here, here, and here). Indeed, a far more serious threat is “hold-out,” whereby the ability of implementers to infringe SEPs without facing serious consequences leads to an inefficient undervaluation of SEP rights (see, for example, here). (At worst, implementers held to be infringers in court will have to pay a “reasonable” licensing fee at some future date, since U.S. case law (unlike foreign case law) has essentially eliminated SEP holders’ ability to obtain an injunction.)

Second, as a legal matter, the filing’s logic would undercut the central statutory purpose of Section 337, which is to provide all U.S. patent holders a right to exclude infringing imports. Section 337 does not distinguish between SEPs and other patents—all are entitled to full statutory protection. Former ITC Chair Deanna Tanner Okun, in critiquing a draft administration policy statement that would severely curtail the rights of SEP holders, assessed the denigration of Section 337 statutory protections in a manner that is equally applicable to the Khan-Slaughter filing:

The Draft Policy Statement also circumvents Congress by upending the statutory framework and purpose of Section 337, which includes the ITC’s practice of evaluating all unfair acts equally. Although the draft disclaims any “unique set of legal rules for SEPs,” it does, in fact, create a special and unequal analysis for SEPs. The draft also implies that the ITC should focus on whether the patents asserted are SEPs when judging whether an exclusion order would adversely affect the public interest. The draft fundamentally misunderstands the ITC’s purpose, statutory mandates, and overriding consideration of safeguarding the U.S. public interest and would — again, without statutory approval — elevate SEP status of a single patent over other weighty public interest considerations. The draft also overlooks Presidential review requirements, agency consultation opportunities and the ITC’s ability to issue no remedies at all.

[Notably,] Section 337’s statutory language does not distinguish the types of relief available to patentees when SEPs are asserted.

Third, Khan and Slaughter not only assert theoretical competitive harms from hold-ups that have not been shown to exist (while ignoring the far more real threat of hold-out), they also ignore the foregone dynamic economic gains that would stem from limitations on SEP rights (see, generally, here). Denying SEP holders the right to obtain a Section 337 exclusion order, as advocated by the filing, deprives them of a key property right. It thereby establishes an SEP “liability rule” (SEP holder relegated to seeking damages), as opposed to a “property rule” (SEP holder may seek injunctive relief) as the SEP holder’s sole means to obtain recompense for patent infringement. As my colleague Andrew Mercado and I have explained, a liability-rule approach denies society the substantial economic benefits achievable through an SEP property rule:

[U]nder a property rule, as contrasted to a liability rule, innovation will rise and drive an increase in social surplus, to the benefit of innovators, implementers, and consumers. 

Innovators’ welfare will rise. … First, innovators already in the market will be able to receive higher licensing fees due to their improved negotiating position. Second, new innovators enticed into the market by the “demonstration effect” of incumbent innovators’ success will in turn engage in profitable R&D (to them) that brings forth new cycles of innovation.

Implementers will experience welfare gains as the flood of new innovations enhances their commercial opportunities. New technologies will enable implementers to expand their product offerings and decrease their marginal cost of production. Additionally, new implementers will enter the market as innovation accelerates. Seeing the opportunity to earn high returns, new implementers will be willing to pay innovators a high licensing fee in order to produce novel and improved products.

Finally, consumers will benefit from expanded product offerings and lower quality-adjusted prices. Initial high prices for new goods and services entering the market will fall as companies compete for customers and scale economies are realized. As such, more consumers will have access to new and better products, raising consumers’ surplus.

In conclusion, the ITC should accord zero weight to Khan and Slaughter’s fundamentally flawed filing in determining whether ITC exclusion orders should be available to SEP holders. Denying SEP holders a statutorily provided right to exclude would tend to undermine the value of their property, diminish investment in improved standards, reduce innovation, and ultimately harm consumers—all to the detriment, not the benefit, of the public interest.  

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—“Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by visiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working at the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law, as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko; recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court broadly held that a congressional decision to regulate a field displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, known for its vicissitudes and adherence to the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was either ambiguous or subject to the FCC’s interpretation was a central debate within the regulatory community during 2012 and 2013. The broadest consensus, at least among my peers, was strongly of the view that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust” was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority. These debates included whether it was necessarily coterminous with the Sherman Act (so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the currents that continue to propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based in the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article also notes the Court’s resistance, at the time, to domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative-law doctrine. And thus, the argument went, the Court’s longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and laments “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer writes in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners has long stood for the proposition that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to rise at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for a 9-0 majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which it acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring ever-more-aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra, which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This is true along the administrative-law vector, with the Court reconsidering how it views delegations of congressional authority to agencies through doctrines such as the major questions doctrine and limits on agency rulemaking authority, and along the antitrust vector, in how the Court thinks about the way agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 201(b) of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine, and the courts’ general acceptance of broad delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases, but, like the major questions doctrine, they go to the question of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that develops legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and an agency that develops substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors, on the other. In practice, there is a world of difference between these approaches. As with the Court’s concerns underlying the major questions doctrine, were the Court to review National Petroleum Refiners or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential limits on the scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in “Administrative Antitrust,” and the Court’s opinions on the antitrust-regulation interface, echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach to antitrust law focuses on the error costs of incorrect judicial decisions and the likelihood that those errors will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement that: “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, as between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct its errors. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate, while the agency swings between policies with changing administrations.

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to leave to change with changing political wills. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, with skepticism of broad deference and agency claims of authority.

Many of the underlying rationales for administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law, and that the Court and the lower courts will welcome that role. But the role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or that seem more grounded in policy better made by Congress, it is likely to find itself on the losing side of judicial opinion.

[This guest post from Lawrence J. Spiwak of the Phoenix Center for Advanced Legal & Economic Public Policy Studies is the second in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

While antitrust and regulation are supposed to be different sides of the same coin, there has always been a healthy debate over which enforcement paradigm is the more efficient. Those who have long suffered under the zealous hand of ex ante regulation would gladly prefer the more dispassionate, case-specific oversight of antitrust. Conversely, those dissatisfied with the current state of antitrust enforcement have increasingly called for abandoning the ex post approach of antitrust and returning to some form of active, “always on” regulation.

While the “antitrust versus regulation” debate has raged for some time, the election of President Joe Biden has brought a new wrinkle: Lina Khan, the controversial chair of the Federal Trade Commission (FTC), has made it very clear that she would like to expand the commission’s role from that of a mere enforcer of the nation’s antitrust laws to that of an agency that also promulgates ex ante “bright line” rules. Thus, the “antitrust versus regulation” debate is no longer academic.

Khan’s efforts to convert the FTC into a de facto regulator should surprise no one, however. Even before she was nominated, Khan was quite vocal about her policy vision for the FTC. For example, in 2020, she co-authored an essay with her former boss (and later, briefly, her FTC colleague) Rohit Chopra in the University of Chicago Law Review titled “The Case for ‘Unfair Methods of Competition’ Rulemaking.” In it, Khan and Chopra lay out both legal and policy arguments to support “unfair methods of competition” (UMC) rulemaking. But as I explain in a law review article published last year in the Federalist Society Review, titled “A Change in Direction for the Federal Trade Commission?”, Khan and Chopra’s arguments simply do not hold up to scrutiny. While I encourage those interested in the bounds of the FTC’s UMC rulemaking authority to read my paper in full, for purposes of this symposium I offer a brief summary of my analysis below.

At the outset of their essay, Chopra and Khan lay out what they believe to be the shortcomings of modern antitrust enforcement. As they correctly note, “[a]ntitrust law today is developed exclusively through adjudication,” which is designed to “facilitate[] nuanced and fact-specific analysis of liability and well-tailored remedies.” However, the authors contend that, while a case-by-case approach may sound great in theory, “in practice, the reliance on case-by-case adjudication yields a system of enforcement that generates ambiguity, unduly drains resources from enforcers, and deprives individuals and firms of any real opportunity to democratically participate in the process.” Chopra and Khan blame this alleged policy failure on the abandonment of per se rules in favor of the use of the “rule-of-reason” approach in antitrust jurisprudence. In their view, a rule-of-reason approach is nothing more than “a broad and open-ended inquiry into the overall competitive effects of particular conduct [which] asks judges to weigh the circumstances to decide whether the practice at issue violates the antitrust laws.” To remedy this perceived analytical shortcoming, they argue that the commission should step into the breach and promulgate ex ante bright-line rules to better enforce the prohibition against “unfair methods of competition” (UMC) outlined in Section 5 of the Federal Trade Commission Act.

As a threshold matter, while courts have traditionally provided guidance as to what exactly constitutes “unfair methods of competition,” Chopra and Khan argue that it should be the FTC that has that responsibility in the first instance. According to Chopra and Khan, because Congress set up the FTC as the independent expert agency to implement the FTC Act and because the phrase “unfair methods of competition” is ambiguous, courts must accord great deference to “FTC interpretations of ‘unfair methods of competition’” under the Supreme Court’s Chevron doctrine.

The authors then argue that the FTC has statutory authority to promulgate substantive rules to enforce the FTC’s interpretation of UMC. In particular, they point to the broad catch-all provision in Section 6(g) of the FTC Act. Section 6(g) provides, in relevant part, that the FTC may “[f]rom time to time . . . make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Although this catch-all rulemaking provision is far from the detailed statutory scheme Congress set forth in the Magnuson-Moss Act to govern rulemaking to deal with Section 5’s other prohibition against “unfair or deceptive acts and practices” (UDAP), Chopra and Khan argue that the D.C. Circuit’s 1973 ruling in National Petroleum Refiners Association v. FTC—a case that predates the Magnuson-Moss Act—provides judicial affirmation that the FTC has the authority to “promulgate substantive rules, not just procedural rules” under Section 6(g). Stating Khan’s argument differently: although there may be no affirmative specific grant of authority for the FTC to engage in UMC rulemaking, in the absence of any limit on such authority, the FTC may engage in UMC rulemaking subject to the constraints of the Administrative Procedure Act.

As I point out in my paper, while there are certainly strong arguments that the FTC lacks UMC rulemaking authority (see, e.g., Ohlhausen & Rill, “Pushing the Limits? A Primer on FTC Competition Rulemaking”), it is my opinion that, given the current state of administrative law—in particular, the high level of judicial deference accorded to agencies under both Chevron and the “arbitrary and capricious” standard—whether the FTC can engage in UMC rulemaking remains a very open question.

That said, even if we assume arguendo that the FTC does, in fact, have UMC rulemaking authority, the case law nonetheless reveals that, despite Khan’s hopes and desires, the FTC cannot unilaterally abandon the consumer welfare standard. As I explain in detail in my paper, even with great judicial deference, it is well-established that independent agencies simply cannot ignore antitrust terms of art (especially when that agency is specifically charged with enforcing the antitrust laws).  Thus, Khan may get away with initiating UMC rulemaking, but, for example, attempting to impose a mandatory common carrier-style non-discrimination rule may be a bridge too far.

Khan’s Policy Arguments in Favor of UMC Rulemaking

Separate from the legal debate over whether the FTC can engage in UMC rulemaking, it is also important to ask whether the FTC should engage in UMC rulemaking. Khan essentially posits that the American economy needs a generic business regulator possessed with plenary power and expansive jurisdiction. Given the United States’ well-documented (and sordid) experience with public-utility regulation, that’s probably not a good idea.

Indeed, to Khan and Chopra, ex ante regulation is superior to ex post antitrust enforcement. For example, they submit that UMC “rulemaking would enable the Commission to issue clear rules to give market participants sufficient notice about what the law is, helping ensure that enforcement is predictable.” Moreover, they argue that “establishing rules could help relieve antitrust enforcement of steep costs and prolonged trials.” In particular, “[t]argeting conduct through rulemaking, rather than adjudication, would likely lessen the burden of expert fees or protracted litigation, potentially saving significant resources on a present-value basis.” And third, they contend that rulemaking “would enable the Commission to establish rules through a transparent and participatory process, ensuring that everyone who may be affected by a new rule has the opportunity to weigh in on it, granting the rule greater legitimacy.”   

Khan’s published writings argue forcefully for greater regulatory power, but they suffer from analytical omissions that render her judgment questionable. For example, it is axiomatic that, while it is easy to imagine or theorize about the many benefits of regulation, regulation imposes significant costs of both the intended and unintended sorts. These costs can include compliance costs, reductions of innovation and investment, and outright entry deterrence that protects incumbents. Yet nowhere in her co-authored essay does Khan contemplate a cost-benefit analysis before promulgating a new regulation; she appears to assume that regulation is always costless, easy, and beneficial, on net. Unfortunately, history shows that we cannot always count on FTC commissioners to engage in wise policymaking.

Khan also fails to contemplate the possibility that changing market circumstances or inartful drafting might call for the removal of regulations previously imposed. Among other things, this failure calls into question her rationale that “clear rules” would make “enforcement … predictable.” If clear rules were as costless and beneficial as she assumes, one might ask why the government does not always use them, rather than the ham-handed approach typical of regulatory interventions. More importantly, enforcing rules still requires case-by-case adjudication, governed by due process and by precedent from prior applications of the rule.

Taken together, Khan’s analytical omissions reveal a lack of historical awareness of (and, apparently, of any personal experience with) the realities of modern public-utility regulation. Indeed, Khan offers as an example of purported rulemaking success the Federal Communications Commission’s 2015 Open Internet Order, which imposed on the internet legacy common-carrier regulations designed for the old Ma Bell monopoly. But as I detail extensively in my paper, the history of net-neutrality regulation bears witness that Khan’s assertions that this process provided “clear rules,” was faster and cheaper, and allowed for meaningful public participation simply are not true.

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.

Life Sciences: USTR Campaigns Against Intellectual-Property Rights

In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.

Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future. 

Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.

This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.

Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition

In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.

It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.

One of Illumina’s few competitors in the global market is the BGI Group, a China-based company that, in 2013, acquired Complete Genomics, a U.S. target that Illumina pursued but relinquished due to anticipated resistance from the FTC in the merger-review process.  The transaction was then cleared by the Committee on Foreign Investment in the United States (CFIUS).

The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention. 

Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.

If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively. 

Wireless Communications: DOJ Takes Aim at Standard-Essential Patents

Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries.  It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.

In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.

The inability to block infringers disrupts this equilibrium by signaling to potential licensees that wireless technologies developed by others can be used at will, with the terms of use to be negotiated through costly and protracted litigation. A no-injunction rule would discourage innovation while encouraging delaying tactics favored by well-resourced device manufacturers (including some of the world’s largest companies by market capitalization) that occupy bottleneck pathways to lucrative retail markets in the United States, China, and elsewhere.

Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.

Conclusion

From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.

This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.

President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services. 

History has, by now, definitively demonstrated that the FCC’s December 2017 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it.

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.

The Perennial Threat of Price Controls

Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation. 

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the Commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, even the threat of rate regulation by the FCC could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential use to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which sought to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good, while also creating downward pressure on supply for that good.

Broadband rate regulation, whether implicit or explicit, will have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers make fewer and less intensive investments to maintain their networks, they would invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (p 9-10)
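Spence’s point can be restated compactly. As a stylized sketch (the notation is mine, not the quoted source’s): let a binding ceiling fix price at \(\bar{p}\), let demand be \(D(\bar{p}, q)\) in quality \(q\), and let costs be \(C(D, q)\). The firm chooses quality to maximize

\[
\pi(q) \;=\; \bar{p}\,D(\bar{p}, q) \;-\; C\big(D(\bar{p}, q),\, q\big),
\qquad\text{so at its optimum}\qquad
(\bar{p} - C_x)\,D_q \;=\; C_q,
\]

where \(C_x\) is marginal production cost, \(C_q\) is the marginal cost of quality, and \(D_q\) is the demand response to quality. Total welfare also includes the gain to inframarginal subscribers from higher quality, \(CS_q > 0\), so at the firm’s chosen quality the marginal welfare effect of more quality is still positive and the welfare-maximizing quality is higher. With price fixed, the firm bears the full cost of quality but cannot capture the surplus that quality creates for existing subscribers, and so it under-provides quality.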

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns is dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market: if the price on one side of the market is set at zero through rate regulation, there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else another layer of imprecise and complex regulation, and even deeper constraints on investment).
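To make that seesaw intuition concrete, here is a toy numerical sketch (the model and all parameters are hypothetical, chosen only for illustration and not drawn from this post or from any broadband data): a platform serves consumers and edge providers whose participation decisions reinforce one another, and capping the edge-provider price at zero raises the platform’s profit-maximizing consumer price.

```python
# Toy illustration of the "seesaw" effect in a two-sided market.
# All demand parameters are made up and serve only to make the point concrete.
import numpy as np

def demands(p_c, p_e):
    """Solve the linear demand system with cross-side network effects:
       D_c = 100 - 2*p_c + 0.5*D_e   (consumers value edge participation)
       D_e =  40 - 1*p_e + 0.3*D_c   (edge providers value consumer reach)
    """
    d_c = (120.0 - 2.0 * p_c - 0.5 * p_e) / 0.85  # substitute D_e into D_c and solve
    d_e = 40.0 - p_e + 0.3 * d_c
    return d_c, d_e

def profit(p_c, p_e):
    d_c, d_e = demands(p_c, p_e)
    if d_c < 0 or d_e < 0:
        return -np.inf  # rule out prices that drive one side out of the market
    return p_c * d_c + p_e * d_e  # marginal costs set to zero for simplicity

grid = np.linspace(0.0, 60.0, 601)

# Case 1: the platform prices both sides freely.
free = max((profit(pc, pe), pc, pe) for pc in grid for pe in grid)

# Case 2: the edge-provider price is forced to zero (e.g., a paid-prioritization ban).
capped = max((profit(pc, 0.0), pc) for pc in grid)

print(f"Unconstrained optimum: p_c = {free[1]:.1f}, p_e = {free[2]:.1f}")
print(f"Edge price forced to 0: p_c = {capped[1]:.1f}")
```

With these made-up parameters, the unconstrained platform splits its margin across the two sides; forcing the edge-provider price to zero pushes the profit-maximizing consumer price upward, which is the pricing pressure described above.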

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities always scrutinize such practices under a rule of reason, requiring a showing of substantial exclusionary effect and lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle and with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

Activists who railed against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) a decade ago today celebrate the 10th anniversary of their day of protest, which they credit with sending the bills down to defeat.

Much of the anti-SOPA/PIPA campaign was based on a gauzy notion of “realizing [the] democratizing potential” of the Internet. Which is fine, until it isn’t.

But despite the activists’ temporary legislative victory, the methods of combating digital piracy that SOPA/PIPA contemplated have been employed successfully around the world. It may, indeed, be time for the United States to revisit that approach, as the very real problems the legislation sought to combat haven’t gone away.

From the perspective of rightsholders, the bills’ most important feature was also their most contentious: the ability to enforce judicial “site-blocking orders.” A site-blocking order is a type of remedy sometimes referred to as a no-fault injunction. Under SOPA/PIPA, a court would have been permitted to issue orders that could be used to force a range of firms—from financial providers to ISPs—to cease doing business with, or to suspend service to, a website that hosted infringing content.

Under current U.S. law, even when a court finds that a site has willfully engaged in infringement, stopping the infringement can be difficult, especially when the parties and their facilities are located outside the country. While Section 512 of the Digital Millennium Copyright Act does allow courts to issue injunctions, there is ambiguity as to whether it allows courts to issue injunctions that obligate online service providers (OSPs) not directly party to a case to remove infringing material.

Section 512(j), for instance, provides for issuing injunctions “against a service provider that is not subject to monetary remedies under this section.” The “not subject to monetary remedies under this section” language could be construed to mean that such injunctions may be obtained even against OSPs that have not been found at fault for the underlying infringement. But as Motion Picture Association President Stanford K. McCoy testified in 2020:

In more than twenty years … these provisions of the DMCA have never been deployed, presumably because of uncertainty about whether it is necessary to find fault against the service provider before an injunction could issue, unlike the clear no-fault injunctive remedies available in other countries.

But while no-fault injunctions for copyright infringement have not materialized in the United States, this remedy has been used widely around the world. In fact, more than 40 countries—including Denmark, Finland, France, India, and England and Wales—have enacted or are under some obligation to enact rules allowing for no-fault injunctions that direct ISPs to disable access to websites that predominantly promote copyright infringement.

In short, precisely the approach to controlling piracy that SOPA/PIPA envisioned has been in force around the world over the last decade. That experience demonstrates that, if properly tailored, no-fault injunctions are an ideal tool for courts to use in combating piracy.

If anything, we should use the anniversary of SOPA/PIPA to reflect on a missed opportunity. Congress should amend Section 512 to grant U.S. courts authority to issue no-fault injunctions that require OSPs to block access to sites that willfully engage in mass infringement.