
What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg? 

That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?

The complication is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a good gauge of the overall social welfare generated by particular policy choices.

As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.

Licensed, Unlicensed, & Shared Spectrum

Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.

SOURCE: U.S. Commerce Department

Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.

Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as they do so with FCC-certified equipment operating at authorized power levels. The most well-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. But Wi-Fi isn’t the only use for unlicensed spectrum, which also supports Bluetooth devices, personal medical devices, appliances, and a wide range of other Internet-of-Things devices.

As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.

SOURCE: Wall Street Journal

While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:

The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.

The Need for Balanced Spectrum Policy

To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It therefore can’t be said that one approach is “better” than the other, as there is undeniable economic value to both.

That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.

To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.

For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.

SOURCE: Authors
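Because the figure is purely hypothetical, the same intuition can be captured in a toy numerical sketch. In the Python snippet below, the functional forms, the interior peak near 56%, and the dollar scaling are all invented for illustration; none of them is an estimate of actual spectrum values.

```python
# Toy illustration of the hypothetical curves above. All functional forms and
# numbers are invented for exposition, not estimates of real spectrum values.
import numpy as np

licensed_share = np.linspace(0.0, 1.0, 1001)   # fraction of spectrum licensed

# Hypothetical auction revenue: grows with the share of spectrum licensed.
auction_revenue = 230 * licensed_share          # $ billions (illustrative)

# Hypothetical consumer welfare (social welfare net of auction revenue):
# it draws value from both licensed and unlicensed uses, so it peaks in between.
consumer_welfare = 1500 * licensed_share * (1 - licensed_share) ** 0.8

print(f"Consumer welfare peaks near "
      f"{licensed_share[np.argmax(consumer_welfare)]:.0%} licensed")
print(f"Auction revenue peaks at "
      f"{licensed_share[np.argmax(auction_revenue)]:.0%} licensed")
```

With these made-up curves, consumer welfare peaks when roughly 56% of the spectrum is licensed, while auction revenue keeps rising all the way to 100%. That gap is the wedge between the public’s interest and the Treasury’s that the next paragraph describes.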

In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenue to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffuse costs.

Of course, we can debate about the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed broadband will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is more than three times greater than the auction revenues received by the U.S. Treasury.

Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.

Conclusion

Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.

At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases.  As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.

Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it should take seriously the need to encourage both private ownership of licensed spectrum and innovative uses of unlicensed and shared spectrum.

States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impinge on Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.

It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

That is why it is disconcerting that some of the requirements put forward by NTIA could deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation of broadband.

The Infrastructure Act requires each recipient of BEAD funding to offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.
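A toy example, with entirely made-up numbers, illustrates the cross-subsidy point in the quote above. Suppose a network must recover $100 of shared monthly cost from ten users, one of whom generates half of all traffic:

```python
# Toy comparison of usage-based billing (UBB) vs. flat pricing.
# All numbers are hypothetical and chosen only to illustrate the cross-subsidy.
shared_cost = 100.0                       # monthly network cost to recover ($)
usage_gb = {"heavy": 500.0}               # one heavy user: half of all traffic
usage_gb.update({f"light_{i}": 500.0 / 9 for i in range(9)})  # nine light users

flat_price = shared_cost / len(usage_gb)  # flat pricing: everyone pays the same

total_gb = sum(usage_gb.values())
ubb_price = {user: shared_cost * gb / total_gb for user, gb in usage_gb.items()}

print(f"Flat rate: each of the {len(usage_gb)} users pays ${flat_price:.2f}")
print(f"UBB: the heavy user pays ${ubb_price['heavy']:.2f}; "
      f"each light user pays ${ubb_price['light_0']:.2f}")
```

Under the flat rate, the nine light users each pay $10 and effectively subsidize the heavy user; under usage-based recovery, the heavy user pays $50 and each light user’s bill falls to about $5.56. That is the sense in which, as the quote puts it, the heaviest users subsidize everyone else rather than the other way around.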

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could potentially be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other provisions, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GONs) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort, to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated by the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would serve to provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include these potentially onerous requirements in the infrastructure bill itself, yet NTIA now seeks to impose them without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that fix these deficiencies.

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—“Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by revisiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working in the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko; recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court held, broadly, that a decision by Congress to regulate a field displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, which is known for its vicissitudes and its reliance on the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was either ambiguous or subject to the FCC’s interpretation was a central debate within the regulatory community during 2012 and 2013. The broad consensus, at least among my peers, was that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own terms (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust” was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority, including whether it was necessarily coterminous with the Sherman Act (and therefore limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the continual currents that propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based on the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds, as well, the Court’s turn at the time against domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative-law doctrine. By the same logic, the Court’s longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and laments “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer writes in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners has long stood for the proposition that the FTC has substantive rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to rise at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for the majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which it acted on a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring ever-more-aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra, which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have afforded a decade ago. This is true both along the administrative-law vector, with the Court reconsidering how it views delegations of congressional authority to agencies (as with the major questions doctrine and limits on agency rulemaking authority), and in the Court’s thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that the general rulemaking grant in Section 201(b) of the Communications Act conferred substantive rulemaking power running throughout the Communications Act, including provisions added long after the grant itself.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, of the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine and the courts’ broad acceptance of sweeping delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases, but, like the major questions doctrine, they go to how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that develops legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and an agency that develops substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors, on the other. In practice, there is a world of difference between these approaches. As with the concerns animating the major questions doctrine, were the Court to review National Petroleum Refiners or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential limits on the scope of the rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in “Administrative Antitrust” and the Court’s opinions on the antitrust-regulation interface echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement: “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct course. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate, while the agency swings between policies with changing administrations.
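One stylized way to write down this comparison (my own shorthand for exposition, not a formula drawn from Easterbrook or the Court’s opinions) treats the expected cost of making antitrust rules through a given institution as the product of the probability p that the institution adopts a mistaken rule, the per-period harm H while the mistake remains in force, and the expected time T until it is corrected:

```latex
% Stylized error-cost comparison (exposition only); "courts" and "agency" are
% the two institutions compared in the text.
\[
  \mathbb{E}[\text{error cost}_i] \;=\; p_i \times H_i \times T_i ,
  \qquad i \in \{\text{courts},\ \text{agency}\} .
\]
```

For courts, T is governed by later cases and market responses; for an agency, it is governed by the two correction mechanisms just described, agency reversal or congressional action.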

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to be left to shift with changing political winds. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, with skepticism of broad deference and agency claims of authority.

Many of the underlying rationales behind the idea of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law, and that the Supreme Court and lower courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or that seem more grounded in policy judgments better made by Congress, it is likely to find itself on the losing side of judicial opinion.

President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services. 

History has, by now, definitively demonstrated that the FCC’s January 2018 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it. 

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.

The Perennial Threat of Price Controls

Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation. 

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the Commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, the mere fact that the FCC has the ability to threaten rate regulation could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential use to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which claimed authority to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good, while also creating downward pressure on supply for that good.
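To make the mechanism concrete, here is a minimal numerical sketch of a binding price ceiling. The linear supply and demand curves and the parameter values are assumptions chosen purely for illustration, not estimates of any real broadband market:

# A minimal sketch of why a binding price ceiling produces a shortage.
# All curves and numbers below are illustrative assumptions.
def quantity_demanded(p):
    return 100.0 - 2.0 * p   # quantity demanded falls as price rises

def quantity_supplied(p):
    return 20.0 + 2.0 * p    # quantity supplied rises with price

p_market = 20.0    # unregulated equilibrium: both sides equal 60 units
p_ceiling = 10.0   # binding cap set below the market-clearing price

shortage = quantity_demanded(p_ceiling) - quantity_supplied(p_ceiling)
print(f"demanded at the cap: {quantity_demanded(p_ceiling):.0f} units")
print(f"supplied at the cap: {quantity_supplied(p_ceiling):.0f} units")
print(f"shortage:            {shortage:.0f} units")  # 80 demanded vs. 40 supplied

The cap simultaneously stimulates quantity demanded and suppresses quantity supplied, which is exactly the mechanism described above.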

Broadband rate regulation, whether implicit or explicit, will have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers make fewer and less intensive investments to maintain their networks, but they would also invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (p 9-10)
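The Spence result can be made concrete with a toy model. The sketch below is only an illustration: the functional forms and parameter values are invented for this post and are not taken from Spence or from the paper quoted above. It compares the quality level a price-capped firm would choose with the level that maximizes total surplus:

# Toy model of quality choice under a binding price cap (illustrative only).
# Demand: Q = a - b*p + g*q, where q is quality; quality costs m*q^2.
import numpy as np

a, b, g = 100.0, 1.0, 2.0   # assumed demand parameters
c, m = 20.0, 5.0            # assumed marginal cost and quality-cost parameter
p_cap = 40.0                # binding regulated price ceiling

def quantity(q):
    return a - b * p_cap + g * q

def profit(q):
    return (p_cap - c) * quantity(q) - m * q**2

def total_surplus(q):
    consumer_surplus = quantity(q) ** 2 / (2 * b)  # triangle under linear demand
    return consumer_surplus + profit(q)

qs = np.linspace(0, 100, 10001)
q_firm = qs[np.argmax(profit(qs))]           # quality the capped firm picks
q_social = qs[np.argmax(total_surplus(qs))]  # surplus-maximizing quality

print(f"firm's quality choice under the cap: {q_firm:.1f}")
print(f"surplus-maximizing quality:          {q_social:.1f}")
# The firm's choice is lower because the cap prevents it from capturing the
# extra surplus that higher quality creates for inframarginal consumers.

With these assumed numbers, the capped firm settles on a quality level of about 4 while total surplus would peak near 27. The exact figures are arbitrary; the direction of the gap is the point.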

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns are dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market. If the price on one side of the market is set at zero through rate regulation, then there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else, it will require another layer of imprecise and complex regulation and even deeper constraints on investment). 
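A stylized break-even example, with numbers invented solely for illustration, shows that pricing pressure at work:

# Illustrative two-sided pricing sketch (assumed numbers, not real ISP figures).
# Suppose the network must recover a fixed cost per subscriber, split between
# what edge providers pay (interconnection or priority fees) and what
# consumers pay.
cost_per_subscriber = 60.0

# Unregulated split of cost recovery across the two sides of the market.
edge_side_payment = 20.0
consumer_price = cost_per_subscriber - edge_side_payment                 # $40

# Rate regulation sets the edge-provider side to zero (e.g., mandatory
# settlement-free interconnection); cost recovery shifts entirely to consumers.
edge_side_payment_capped = 0.0
consumer_price_capped = cost_per_subscriber - edge_side_payment_capped   # $60

print(f"consumer price with two-sided recovery: ${consumer_price:.0f}")
print(f"consumer price with the edge side capped at zero: ${consumer_price_capped:.0f}")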

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements as manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities typically scrutinize them under a rule of reason, requiring a showing of substantial exclusionary effect and a lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle and with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

The primary aim of the bill’s broadband provisions is to connect the truly unconnected—what the bill refers to as the “unserved” (those lacking a connection of at least 25/3 Mbps) and “underserved” (those lacking a connection of at least 100/20 Mbps). In seeking to realize this goal, it’s important to bear in mind that dynamic analysis demonstrates that the broadband market is overwhelmingly healthy, even in locales with relatively few market participants. According to the Federal Communications Commission’s (FCC) latest Broadband Progress Report, approximately 5% of U.S. consumers have no options for at least 25/3 Mbps broadband, and slightly more than 8% have no options for at least 100/10 Mbps.

Reaching the truly unserved portions of the country will require targeting subsidies toward areas that are currently uneconomic to reach. Without properly targeted subsidies, there is a risk of dampening incentives for private investment and slowing broadband buildout. These tradeoffs must be considered. As we wrote previously in our Broadband Principles issue brief:

  • To move forward successfully on broadband infrastructure spending, Congress must take seriously the roles of both the government and the private sector in reaching the unserved.
  • Current U.S. broadband infrastructure is robust, as demonstrated by the way it met the unprecedented surge in demand for bandwidth during the recent COVID-19 pandemic.
  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing Internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations. Competitive markets should be subject to lighter-touch obligations.

The Good

The BIF’s broadband provisions ended up in a largely positive place, at least as written. There are two primary ways it seeks to achieve its goals of promoting adoption and deploying broadband to unserved/underserved areas. First, it makes permanent the Emergency Broadband Benefit program that had been created to provide temporary aid to households who struggled to afford Internet service during the COVID-19 pandemic, though it does lower the monthly user subsidy from $50 to $30. The renamed Affordable Connectivity Program can be used to pay for broadband on its own, or as part of a bundle of other services (e.g., a package that includes telephone, texting, and the rental fee on equipment).

Relatedly, the bill also subsidizes the cost of equipment by extending a one-time reimbursement of up to $100 to broadband providers when a consumer takes advantage of the provider’s discounted sale of connected devices, such as laptops, desktops, or tablet computers capable of Wi-Fi and video conferencing. 

The decision to make the emergency broadband benefit a permanent program broadly comports with recommendations we have made to employ user subsidies (such as connectivity vouchers) to encourage broadband adoption.

The second and arguably more important of the bill’s broadband provisions is its creation of the $42 billion Broadband Equity, Access and Deployment (BEAD) Program. Under the direction of the NTIA, BEAD will direct grants to state governments to help the states expand access to and use of high-speed broadband.  

On the bright side, BEAD does appear to be designed to connect the country’s truly unserved regions—which, as noted above, account for about 8% of the nation’s households. The law explicitly requires prioritizing unserved areas before underserved areas. Even where the text references underserved areas as an additional priority, it does so in a way that won’t necessarily distort private investment.  The bill also creates preferences for projects in persistent and high-poverty areas. Thus, the targeted areas are very likely to fall on the “have-not” side of the digital divide.

On its face, the subsidy and grant approach taken in the bill is, all things considered, commendable. As we note in our broadband report, care must be taken to avoid interventions that distort private investment incentives, particularly in a successful industry like broadband. The goal, after all, is more broadband deployment. If policy interventions only replicate private options (usually at higher cost) or, worse, drive private providers from a market, broadband deployment will be slowed or reversed. The approach taken in this bill attempts to line up private incentives with regulatory goals.

As we discuss below, however, the devil is in the details. In particular, BEAD’s structure could theoretically allow enough discretion in execution that a large amount of waste, fraud, and abuse could end up frustrating the program’s goals.

The Bad

While the bill largely keeps the right focus of building out broadband in unserved areas, there are reasons to question some of its preferences and solutions. For instance, the state subgrant process puts for-profit and government-run broadband solutions on a level playing field for the purposes of receiving funds, even though the two types of entities exist in very different institutional environments with very different incentives.

There is also a requirement that projects provide broadband of at least 100/20 Mbps speed, even though the bill defines “unserved” as lacking at least 25/3 Mbps. While this is not terribly objectionable, the preference for 100/20 could have downstream effects on the hardest-to-connect areas. It may only be economically feasible to connect some very remote areas with a 25/3 Mbps connection. Requiring higher speeds in such areas may, despite the best intentions, slow deployment and push providers to prioritize areas that are relatively easier to connect.

For comparison, the FCC’s Connect America Fund and Rural Digital Opportunity Fund programs do give greater weight in bidding to providers that can deploy higher-speed connections. But in areas where a lower speed tier is cost-justified, a provider can still bid and win. This sort of approach would have been preferable in the infrastructure bill.

But the bill’s largest infirmity is not in its terms or aims, but in the potential for mischief in its implementation. In particular, the BEAD grant program lacks the safeguards that have traditionally been applied to this sort of funding at the FCC. 

Typically, an aid program of this sort would be administered by the FCC under rulemaking bound by the Administrative Procedure Act (APA). As cumbersome as that process may sometimes be, APA rulemaking provides a high degree of transparency that results in fairly reliable public accountability. BEAD, by contrast, eschews this process, and instead permits NTIA to work directly with governors and other relevant state officials to dole out the money.  The funds will almost certainly be distributed more quickly, but with significantly less accountability and oversight. 

A large amount of the implementation detail will be driven at the state level. By definition, this will make it more difficult to monitor how well the program’s aims are being met. It also creates a process with far more opportunities for highly interested parties to lobby state officials to direct funding to their individual pet projects. None of this is to say that BEAD funding will necessarily be misdirected, but NTIA will need to be very careful in how it proceeds.

Conclusion: The Opportunity

Although the BIF’s broadband funds are slated to be distributed next year, we may soon be able to see whether there are warning signs that the legitimate goal of broadband deployment is being derailed by political favoritism. BEAD initially grants a flat $100 million to each state; only additional monies over that initial amount need to be sought through the grant program. Thus, it is highly likely that some states will begin to enact legislation and related regulations in the coming year based on that guaranteed money. This early regulatory and legislative activity could provide insight into the pitfalls the full BEAD grantmaking program will face.

The larger point, however, is that the program needs safeguards. Where Congress declined to adopt them, NTIA would do well to implement them. Obviously, this will be something short of full APA rulemaking, but the NTIA will need to make accountability and reliability a top priority to ensure that the digital divide is substantially closed.

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.
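As a rough consistency check on that last figure (our own back-of-the-envelope arithmetic, not a number taken from the FCC data), a 98% cumulative decline over 20 years works out to a compound annual price decline of roughly 18%:

# Back-of-the-envelope arithmetic for the cumulative price decline cited above.
remaining_share = 0.02                    # a 98% decline leaves 2% of the original price
annual_factor = remaining_share ** (1 / 20)
print(f"implied annual decline in price-per-Mbps: {1 - annual_factor:.1%}")  # ~17.8%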

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or through the government going into the Internet business itself via municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach because of low population density or insufficient demand stemming from low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein once said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.

It’s a telecom tale as old as time: industry gets a prime slice of radio spectrum and falls in love with it, only to take it for granted. Then, faced with the reapportionment of that spectrum, it proceeds to fight tooth and nail (and law firm) to maintain the status quo. 

In that way, the decision by the Intelligent Transportation Society of America (ITSA) and the American Association of State Highway and Transportation Officials (AASHTO) to seek judicial review of the Federal Communications Commission’s (FCC) order reassigning the 5.9 GHz band was right out of central casting. But rather than simply asserting that the FCC’s order was arbitrary, ITSA foreshadowed many of the arguments that it intends to make against the order.

There are three arguments of note, and should ITSA win on the merits of any of those arguments, it would mark a significant departure from the way spectrum is managed in the United States.

First, ITSA asserts that the U.S. Department of Transportation (DOT), by virtue of its role as the nation’s transportation regulator, rather than the FCC, retains authority to regulate radio spectrum as it pertains to DOT programs. Of course, this notion is absurd on its face. Congress mandated that the FCC act as the exclusive regulator of non-federal uses of wireless spectrum. This leaves the FCC free to—in the words of the Communications Act—“encourage the provision of new technologies and services to the public” and to “provide to all Americans” the best communications networks possible.

In contrast, other federal agencies with some amount of allocated spectrum each focus exclusively on a particular mission, without regard to the broader concerns of the country (including uses by sister agencies or the states). That’s why, rather than allocate the spectrum directly to DOT, the statute directs the FCC to consider allocating spectrum for Intelligent Transportation Systems and to establish the rules for their spectrum use. The statute directs the FCC to consult with the DOT, but leaves final decisions to the FCC.

Today’s crowded airwaves make it impossible to allocate spectrum for 5G, Wi-Fi 6, and other innovative uses without somehow impacting spectrum used by a federal agency. Accepting the ITSA position would fundamentally alter the FCC’s role relative to other agencies with an interest in the disposition of spectrum, rendering the FCC a vestigial regulatory backwater subject to non-expert veto. As a matter of policy, this would effectively prevent the United States from meeting the growing challenges of our exponentially increasing demand for wireless access. 

It would also put us at a tremendous disadvantage relative to other countries. International coordination of wireless policy has become critical in the global economy, with supply chains and wireless-equipment manufacturers dependent on common standards to drive economies of scale and interoperability around the globe. At the last World Radio Conference in 2019, interagency spectrum squabbling significantly undermined U.S. negotiating efforts. If agencies actually had veto power over the FCC’s spectrum decisions, the United States would have no way to create a coherent negotiating position, let alone to advocate effectively for our national interests.

Second, though relatedly, ITSA asserts that the FCC’s engineers failed to appropriately evaluate safety impacts and interference concerns. It’s hard to see how this could be the case, given both the massive engineering record and the FCC’s globally recognized expertise in spectrum. As a general rule, the FCC leads the world in spectrum engineering (there is a reason things like mobile service and Wi-Fi started in the United States). No other federal agency (including DOT) has such extensive, varied, and lengthy experience with interference analysis. This allows the FCC to develop broadly applicable standards to protect all emergency communications. Every emergency first responder relies on this expertise every day that they use wireless communications to save lives. Here again, we see the wisdom in Congress delegating to a single expert agency the task of finding the right balance to meet all our wireless public-safety needs.

Third, the petition ambitiously asks the court to set aside all parts of the order, with the exception of the one portion that ITSA likes: freeing the top 30 MHz of the band for use by C-V2X on a permanent basis. Given their other arguments, this assertion strains credulity. Either the FCC makes the decisions, or the DOT does. Giving federal agencies veto power over FCC decisions would be bad enough. Allowing litigants to play federal agencies against each other so they can mix and match results would produce chaos and/or paralysis in spectrum policy.

In short, ITSA is asking the court to fundamentally redefine the scope of FCC authority to administer spectrum when other federal agencies are involved; to undermine deference owed to FCC experts; and to do all of this while also holding that the FCC was correct on the one part of the order with which the complainants agree. This would make future progress in wireless technology effectively impossible.

We don’t let individual states decide which side of the road to drive on, or whether red or some other color traffic light means stop, because traffic rules only work when everybody follows the same rules. Wireless policy can only work if one agency makes the rules. Congress says that agency is the FCC. The courts (and other agencies) need to remember that.

Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: from “direct consumer” systems that the government runs end to end, to “open access” arrangements in which the government builds the back-end but leaves it to private firms to bring connections to consumers, to “middle mile” networks in which the government network reaches only some parts of the community but allows private firms to connect and serve other consumers. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move. 

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs. 

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection that consumers demand depends on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the relative importance consumers place on upload speeds versus download speeds may be highly asymmetrical when left to their own choices.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs for their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or upload speeds of 100 Mbps versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows that an ISP has been unsuccessful in cost-effectively serving consumers. While broadband companies may be able to sustain losses for some period of time, they must ultimately turn a profit or exit the marketplace. Profit and loss both serve important functions.

Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and as a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by the free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operations on access to below-market loans backed by government power, cross-subsidies when the network is run by a government electric utility, and/or public money in the form of government borrowing (i.e., bonds) or taxes.

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and proceeding to the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz, (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking, so making more intensive use of our Internet connections during the day; many have children at home during the day who are using the Internet for both education and entertainment; and we are going out less in the evening so making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood and Ryssdal discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps—its approach follows that of the OIO: ISPs are free to adopt data-cap policies but must disclose them. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced them to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower-priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher-capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.
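A stylized two-household example, with willingness-to-pay figures invented purely for illustration, shows how a capped, lower-priced tier can expand access relative to a single flat-rate plan:

# Illustrative tiering sketch (assumed willingness-to-pay values and prices).
# A light user values a capped basic tier; a heavy user values unlimited service.
light_user_value_capped = 25.0      # what the light user will pay for a capped tier
heavy_user_value_unlimited = 80.0   # what the heavy user will pay for unlimited

flat_price = 50.0                            # single flat-rate price for unlimited service
capped_price, unlimited_price = 20.0, 60.0   # a two-tier menu

# One flat price: the light user stays offline (25 < 50); only the heavy user buys.
flat_connected = sum(v >= flat_price
                     for v in (light_user_value_capped, heavy_user_value_unlimited))

# Two tiers: each household buys the tier aimed at it.
tiered_connected = sum([light_user_value_capped >= capped_price,
                        heavy_user_value_unlimited >= unlimited_price])

print(f"flat pricing connects {flat_connected} of 2 households")
print(f"tiered (capped) pricing connects {tiered_connected} of 2 households")

With these assumed numbers, the tiered menu also raises more revenue ($80 versus $50) while connecting both households, which is the price-discrimination logic described above.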

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows U.S. Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nation-wide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their meetings. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel). 

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get congestion during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.
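The standard peak-load pricing rule, which the “shave the peak” aside in the next paragraph echoes, handles this by charging off-peak users only the marginal operating cost and loading the cost of capacity onto the peak users whose demand determines how much capacity must be built. Here is a minimal sketch; the demand and cost parameters are purely assumed, chosen only to make the point visible:

# Minimal peak-load pricing sketch (illustrative parameters, not real data).
# Efficient rule: off-peak users pay only the marginal operating cost;
# peak users pay operating cost plus the full marginal cost of capacity.
beta = 5.0        # marginal operating cost per unit of traffic ($/unit)
cap_cost = 20.0   # marginal cost of a unit of network capacity ($/unit)

def demand(p, a, b):
    """Linear demand Q = a - b*p (floored at zero)."""
    return max(a - b * p, 0.0)

# Peak (evening streaming) and off-peak (daytime) demand parameters.
peak = dict(a=100.0, b=1.0)
off_peak = dict(a=40.0, b=1.0)

p_off = beta              # off-peak: capacity is already built, so incremental
                          # use costs only the operating cost
p_peak = beta + cap_cost  # peak: these users drive the capacity decision

q_peak = demand(p_peak, **peak)
q_off = demand(p_off, **off_peak)

print(f"peak price ${p_peak:.0f}, peak usage {q_peak:.0f} units (sets capacity)")
print(f"off-peak price ${p_off:.0f}, off-peak usage {q_off:.0f} units")
# A single flat price in both periods would either under-price the peak
# (congestion) or over-price the off-peak (deadweight loss), which is the
# tradeoff described above.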

Data caps are one way to deal with this issue. Since most users making the most intensive use of the network are all doing so at the same time (at peak hour), this incremental cost either discourages this use or provides the revenue necessary to expand capacity to accommodate their use. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”)). 

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech that violates the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO), expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit’s evaluation of the prior FCC net neutrality order – the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of ephemeral distinctions addressed long ago, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin – only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public safety concerns – like declarations of national emergencies – are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument that could lead the Court to remand the Order back to the Commission: it concerns a provision of the Communications Act (47 USC 257(c)) that Congress repealed after the Order was adopted and that was a noteworthy part of the notice the FCC gave when the Order was proposed.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.


Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which if prioritization were to be deployed it would be so that emergency services would be able to work over older infrastructure or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high speed Internet – we have counties without doctors in them). A state in which if paid prioritization were to be developed it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rule making followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the people net neutrality advocates like to pat themselves on the back as advocating for. We should all be angry about this, and demanding that Congress put this debate out of our misery.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-about issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result, the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.
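To make that scrubbing step concrete, here is a minimal sketch of the kind of filter a dialer vendor might have wired into its calling system (purely illustrative; the registry entries and phone numbers are hypothetical placeholders, not any real vendor’s implementation):

```python
# Minimal sketch of DNC "scrubbing": before a campaign runs, drop every
# number on the call list that also appears in the Do Not Call registry.
# All numbers and registry entries below are hypothetical placeholders.

def scrub(call_list: list[str], dnc_registry: set[str]) -> list[str]:
    """Return only the numbers that do not appear in the registry."""
    return [number for number in call_list if number not in dnc_registry]

# Stand-in for the registry download a dialer vendor would refresh periodically.
dnc_registry = {"+15055550101", "+15055550103"}

campaign = ["+15055550100", "+15055550101", "+15055550102", "+15055550103"]
print(scrub(campaign, dnc_registry))  # ['+15055550100', '+15055550102']
```

The point of the sketch is the economics, not the code: once most Americans’ numbers end up in that set, the list of numbers that survives the filter is too short to support the business.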

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA was the first law adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. We were still in a world of analog telephones, and Caller ID was a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny to it, on the basis that it was regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is a discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vainglorious and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
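For readers curious about what “Caller ID that actually works” looks like under the hood, here is a minimal sketch of the general signed-attestation idea (an illustration only, not the actual wire format or certificate infrastructure carriers are deploying; the keys and phone numbers are hypothetical): the originating carrier signs an assertion about the number it verified, and the terminating carrier checks that signature before trusting what it displays.

```python
# Illustrative sketch of cryptographically authenticated Caller ID:
# the originating carrier signs an assertion about the calling number,
# and the terminating carrier verifies it before trusting the display.
# Keys and numbers are hypothetical; real deployments use carrier
# certificates chained to an industry governance authority.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())  # originating carrier's key
verification_key = signing_key.public_key()            # shared with other carriers

def sign_call(orig: str, dest: str) -> tuple[bytes, bytes]:
    """Originating carrier attests to the calling number it verified."""
    assertion = json.dumps({
        "orig": orig,
        "dest": dest,
        "iat": int(time.time()),  # issued-at timestamp limits replay
    }).encode()
    signature = signing_key.sign(assertion, ec.ECDSA(hashes.SHA256()))
    return assertion, signature

def verify_call(assertion: bytes, signature: bytes) -> bool:
    """Terminating carrier checks the signature before showing Caller ID."""
    try:
        verification_key.verify(signature, assertion, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

assertion, sig = sign_call("+15055550100", "+15055550199")
print(verify_call(assertion, sig))                            # True: display the number
print(verify_call(assertion.replace(b"0100", b"0666"), sig))  # False: tampered, reject
```

The design point is that spoofing a number would then require forging a carrier’s signature rather than simply typing a fake value into a field — which is also what makes carrier-level blocking of unauthenticated traffic feasible.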

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.