Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to meet peak demand.

Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.
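
To put rough numbers on that squeeze, the back-of-envelope sketch below combines the Journal’s capacity figure with the demand projection cited above. The retirement total is our inference from taking the Journal’s “about half” characterization literally, not a reported figure, and nameplate megawatts overstate what variable wind and solar can contribute at peak.

```python
# Back-of-envelope arithmetic using only the figures cited above.
# The retirement total is implied by taking "about half" literally;
# it is not a reported number.
additions_mw = 21_000                 # wind, solar, battery storage added by 2030
retirements_mw = 2 * additions_mw     # implied fossil-fuel retirements (~42,000 MW)
net_change_mw = additions_mw - retirements_mw

demand_growth = 0.114                 # projected 10-year Western Interconnection demand growth

print(f"Implied retirements: ~{retirements_mw:,} MW")
print(f"Net change in nameplate capacity: {net_change_mw:,} MW")
print(f"Demand growth over the same decade: {demand_growth:.1%}")
```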

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will land in the queue behind another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: The pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.
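
A toy numeric example makes the hold-out incentive concrete. All figures below are invented for illustration; the point is only that waiting dominates requesting under a last-attacher-pays rule.

```python
# Hypothetical illustration of the hold-out incentive under a
# last-attacher-pays rule. All numbers are invented.
REPLACEMENT_COST = 10_000      # cost of a new, taller pole
ATTACHMENT_VALUE = 6_000       # value each attacher gets from attaching

# Whoever requests first pays the full replacement cost;
# everyone who attaches afterward free-rides on the new pole.
first_mover_payoff = ATTACHMENT_VALUE - REPLACEMENT_COST   # -4,000
later_mover_payoff = ATTACHMENT_VALUE                      # +6,000

print(f"Request first: {first_mover_payoff:+,}")
print(f"Wait for someone else to pay: {later_mover_payoff:+,}")
# Every prospective attacher prefers to wait, so no one requests and the
# pole is never upgraded, even though two attachers' combined value
# (12,000) exceeds the replacement cost.
```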

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and hold-ups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.

What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg? 

That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?

The complication in this regard is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a perfect gauge of the overall social welfare generated by particular policy choices.

As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.

Licensed, Unlicensed, & Shared Spectrum

Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.

[Figure. SOURCE: U.S. Commerce Department]

Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.

Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as it’s with FCC-certified equipment operating at authorized power levels. The most well-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. Wi-Fi isn’t the only use for unlicensed spectrum; it covers a range of devices such as those relying on Bluetooth, as well as personal medical devices, appliances, and a wide range of Internet-of-Things devices.

As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.

[Figure. SOURCE: Wall Street Journal]
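
The congestion dynamic is easy to sketch. The toy model below uses hypothetical numbers and an idealized equal split of a router’s capacity; real Wi-Fi contention (collisions, retransmissions) makes the drop-off even steeper.

```python
# Toy model of a shared, unlicensed resource: a router's fixed capacity
# divided among active users. Numbers are hypothetical; real 802.11
# contention degrades faster than an equal split suggests.
ROUTER_CAPACITY_MBPS = 300.0

def per_user_throughput_mbps(active_users: int) -> float:
    """Idealized equal share of the router's aggregate throughput."""
    return ROUTER_CAPACITY_MBPS / max(active_users, 1)

for users in (1, 5, 20, 60):
    print(f"{users:>2} users -> ~{per_user_throughput_mbps(users):.0f} Mbps each")
```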

While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:

The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.

The Need for Balanced Spectrum Policy

To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It can’t therefore be said that one approach is “better” than the other, as there is undeniable economic value to both.

That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.

To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.

For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.

[Figure. SOURCE: Authors]
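
Because the figure is purely hypothetical, it can be reproduced with equally hypothetical math. In the sketch below, the functional forms are invented solely to mimic the figure’s qualitative shape (consumer welfare peaking at an interior point near 56% licensed, auction revenue rising monotonically to 100% licensed); they estimate nothing.

```python
import numpy as np

# Stylized, invented functional forms, chosen only to mimic the
# hypothetical figure above; not derived from any data.
licensed_share = np.linspace(0.0, 1.0, 101)

# Consumer welfare: rises as licensed services (mobile, broadcast) expand,
# then falls as unlicensed uses (Wi-Fi, Bluetooth, IoT) are crowded out.
consumer_welfare = licensed_share * (1.0 - licensed_share) ** 0.8

# Auction revenue: grows with the share of spectrum licensed and auctioned.
auction_revenue = 0.4 * licensed_share

print(f"Consumer welfare peaks at ~{licensed_share[consumer_welfare.argmax()]:.0%} licensed")
print(f"Auction revenue peaks at {licensed_share[auction_revenue.argmax()]:.0%} licensed")
```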

In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenues to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffuse costs.

Of course, we can debate about the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed broadband will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is more than three times greater than the auction revenues received by the U.S. Treasury.

Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.

Conclusion

Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.

At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases. As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.

Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it needs to take seriously the need to encourage both private ownership of licensed spectrum and innovative uses of unlicensed and shared spectrum.

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases it actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” in the broader category of “sensitive covered data,” for which a consumer’s expression of affirmative consent (“cookie consent”) would be required to collect or process. Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change and “individual’s online activities” are once again deemed to be “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is necessary, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes the “right to opt-out of targeted advertising”—in Section 204(c)—and a special targeted-advertising “permissible purpose” in Section 101(b)(17), this implies that it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could have been strictly necessary for one of the other permissible purposes from Section 101(b), but none of them appear to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and revert to the definition used in the version of the ADPPA that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As introduced, the bill included an exception in Section 101(b)(2) that could have partially addressed the concern (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the complement of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. hold a possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) if those firms may at any point in the future need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a problem very similar to the second duty’s. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically, one would expect the law to allow sharing data in a de-identified form, which would align with the principle of data minimization. What the ADPPA does instead is effectively to impose a duty to share personal data together with identifying information whenever re-identification is to remain possible. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given such open-ended provisions as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives become obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt the California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond ADPPA’s requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version, which regressed the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. Those examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

In an expected decision (but with a somewhat unexpected coalition), the U.S. Supreme Court voted 5-4 to vacate an order issued early last month by the 5th U.S. Circuit Court of Appeals, which had stayed an earlier December 2021 order from the U.S. District Court for the Western District of Texas enjoining Texas’ attorney general from enforcing the state’s recently enacted social-media law, H.B. 20. The law would bar social-media platforms with more than 50 million active users from engaging in “censorship” based on political viewpoint.

The shadow-docket order reinstates the preliminary injunction sought by NetChoice and the Computer & Communications Industry Association to block the law—which they argue is facially unconstitutional—from taking effect. The trade groups also are challenging a similar Florida law, which the 11th U.S. Circuit Court of Appeals last week ruled was “substantially likely” to violate the First Amendment. Both state laws are thus enjoined while challenges on the merits proceed.

But the element of the Supreme Court’s order drawing the most initial interest is the “strange bedfellows” breakdown that produced it. Chief Justice John Roberts was joined by conservative Justices Brett Kavanaugh and Amy Coney Barrett and liberals Stephen Breyer and Sonia Sotomayor in moving to vacate the 5th Circuit’s stay. Meanwhile, Justice Samuel Alito wrote a dissent that was joined by fellow conservatives Clarence Thomas and Neil Gorsuch, and liberal Justice Elena Kagan also dissented without offering a written justification.

A glance at the recent history, however, reveals why it should not be all that surprising that the justices would not come down along predictable partisan lines. Indeed, when it comes to content moderation and the question of whether to designate platforms as “common carriers,” the one undeniably predictable outcome is that both liberals and conservatives have been remarkably inconsistent.

Both Sides Flip-Flop on Common Carriage

Ever since Justice Thomas used his concurrence in 2021’s Biden v. Knight First Amendment Institute to lay out a blueprint for how states could regulate social-media companies as common carriers, states led by conservatives have been working to pass bills to restrict the ability of social media companies to “censor.” 

Forcing common carriage on the Internet was, not long ago, something conservatives opposed. It was progressives who called net neutrality the “21st Century First Amendment.” The actual First Amendment, however, protects the rights of both Internet service providers (ISPs) and social-media companies to decide the rules of the road on their own platforms.

Back in the heady days of 2014, when the Federal Communications Commission (FCC) was still planning its next moves on net neutrality after losing at the U.S. Court of Appeals for the D.C. Circuit the first time around, Geoffrey Manne and I at the International Center for Law & Economics teamed with Berin Szoka and Tom Struble of TechFreedom to write a piece for the First Amendment Law Review arguing that there was no exception that would render broadband ISPs “state actors” subject to the First Amendment. Further, we argued that the right to editorial discretion meant that net-neutrality regulations would be subject to (and likely fail) First Amendment scrutiny under Tornillo or Turner.

After the FCC moved to reclassify broadband as a Title II common carrier in 2015, then-Judge Kavanaugh of the D.C. Circuit dissented from the denial of en banc review, in part on First Amendment grounds. He argued that “the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market.” In fact, Kavanaugh went so far as to link the interests of ISPs and Big Tech (and even traditional media), stating:

If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

This was not a controversial view among free-market, right-of-center types at the time.

An interesting shift started to occur during the presidency of Donald Trump, however, as tensions between social-media companies and many on the right came to a head. Instead of seeing these companies as private actors with strong First Amendment rights, some conservatives began looking either for ways to apply the First Amendment to them directly as “state actors” or to craft regulations that would essentially make social-media companies into common carriers with regard to speech.

But Kavanaugh’s opinion in USTelecom remains the best way forward to understand how the First Amendment applies online today, whether regarding net neutrality or social-media regulation. Given Justice Alito’s view, expressed in his dissent, that it “is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” it is a fair bet that laws like those passed by Texas and Florida will get a hearing before the Court in the not-distant future. If Justice Kavanaugh’s opinion has sway among the conservative bloc of the Supreme Court, or is able to peel off justices from the liberal bloc, the Texas law and others like it (as well as net-neutrality regulations) will be struck down as First Amendment violations.

Kavanaugh’s USTelecom Dissent

In then-Judge Kavanaugh’s dissent, he highlighted two reasons he believed the FCC’s reclassification of broadband as Title II was unlawful. The first was that the reclassification decision was a “major question” that required clear authority delegated by Congress. The second, more important point was that the FCC’s reclassification decision was subject to the Turner standard. Under that standard, since the FCC did not engage—at the very least—in a market-power analysis, the rules could not stand, as they amounted to mandated speech.

The interesting part of this opinion is that it tracks very closely to the analysis of common-carriage requirements for social-media companies. Kavanaugh’s opinion offered important insights into:

  1. the applicability of the First Amendment right to editorial discretion to common carriers;
  2. the “use it or lose it” nature of this right;
  3. whether Turner’s protections depended on scarcity; and 
  4. what would be required to satisfy Turner scrutiny.

Common Carriage and First Amendment Protection

Kavanaugh found unequivocally that common carriers, such as ISPs classified under Title II, were subject to First Amendment protection under the Turner decisions:

The Court’s ultimate conclusion on that threshold First Amendment point was not obvious beforehand. One could have imagined the Court saying that cable operators merely operate the transmission pipes and are not traditional editors. One could have imagined the Court comparing cable operators to electricity providers, trucking companies, and railroads – all entities subject to traditional economic regulation. But that was not the analytical path charted by the Turner Broadcasting Court. Instead, the Court analogized the cable operators to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment. As Turner Broadcasting concluded, the First Amendment’s basic principles “do not vary when a new and different medium for communication appears” – although there of course can be some differences in how the ultimate First Amendment analysis plays out depending on the nature of (and competition in) a particular communications market. Brown v. Entertainment Merchants Association, 564 U.S. 786, 790 (2011) (internal quotation mark omitted).

Here, of course, we deal with Internet service providers, not cable television operators. But Internet service providers and cable operators perform the same kinds of functions in their respective networks. Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit. Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.

Indeed, some of the same entities that provide cable television service – colloquially known as cable companies – provide Internet access over the very same wires. If those entities receive First Amendment protection when they transmit television stations and networks, they likewise receive First Amendment protection when they transmit Internet content. It would be entirely illogical to conclude otherwise. In short, Internet service providers enjoy First Amendment protection of their rights to speak and exercise editorial discretion, just as cable operators do.

‘Use It or Lose It’ Right to Editorial Discretion

Kavanaugh questioned whether the First Amendment right to editorial discretion depends, to some degree, on how much the entity used the right. Ultimately, he rejected the idea forwarded by the FCC that, since ISPs don’t restrict access to any sites, they were essentially holding themselves out to be common carriers:

I find that argument mystifying. The FCC’s “use it or lose it” theory of First Amendment rights finds no support in the Constitution or precedent. The FCC’s theory is circular, in essence saying: “They have no First Amendment rights because they have not been regularly exercising any First Amendment rights and therefore they have no First Amendment rights.” It may be true that some, many, or even most Internet service providers have chosen not to exercise much editorial discretion, and instead have decided to allow most or all Internet content to be transmitted on an equal basis. But that “carry all comers” decision itself is an exercise of editorial discretion. Moreover, the fact that the Internet service providers have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion. That would be akin to arguing that people lose the right to vote if they sit out a few elections. Or citizens lose the right to protest if they have not protested before. Or a bookstore loses the right to display its favored books if it has not done so recently. That is not how constitutional rights work. The FCC’s “use it or lose it” theory is wholly foreign to the First Amendment.

Employing a similar logic, Kavanaugh also rejected the notion that net-neutrality rules were essentially voluntary, given that ISPs held themselves out as carrying all content.

Relatedly, the FCC claims that, under the net neutrality rule, an Internet service provider supposedly may opt out of the rule by choosing to carry only some Internet content. But even under the FCC’s description of the rule, an Internet service provider that chooses to carry most or all content still is not allowed to favor some content over other content when it comes to price, speed, and availability. That half-baked regulatory approach is just as foreign to the First Amendment. If a bookstore (or Amazon) decides to carry all books, may the Government then force the bookstore (or Amazon) to feature and promote all books in the same manner? If a newsstand carries all newspapers, may the Government force the newsstand to display all newspapers in the same way? May the Government force the newsstand to price them all equally? Of course not. There is no such theory of the First Amendment. Here, either Internet service providers have a right to exercise editorial discretion, or they do not. If they have a right to exercise editorial discretion, the choice of whether and how to exercise that editorial discretion is up to them, not up to the Government.

Think about what the FCC is saying: Under the rule, you supposedly can exercise your editorial discretion to refuse to carry some Internet content. But if you choose to carry most or all Internet content, you cannot exercise your editorial discretion to favor some content over other content. What First Amendment case or principle supports that theory? Crickets.

In a footnote, Kavanaugh continued to lambast the theory of “voluntary regulation” forwarded by the concurrence, stating:

The concurrence in the denial of rehearing en banc seems to suggest that the net neutrality rule is voluntary. According to the concurrence, Internet service providers may comply with the net neutrality rule if they want to comply, but can choose not to comply if they do not want to comply. To the concurring judges, net neutrality merely means “if you say it, do it.”…. If that description were really true, the net neutrality rule would be a simple prohibition against false advertising. But that does not appear to be an accurate description of the rule… It would be strange indeed if all of the controversy were over a “rule” that is in fact entirely voluntary and merely proscribes false advertising. In any event, I tend to doubt that Internet service providers can now simply say that they will choose not to comply with any aspects of the net neutrality rule and be done with it. But if that is what the concurrence means to say, that would of course avoid any First Amendment problem: To state the obvious, a supposed “rule” that actually imposes no mandates or prohibitions and need not be followed would not raise a First Amendment issue.

Scarcity and Capacity to Carry Content

The FCC had also argued that there was a difference between ISPs and the cable companies in Turner in that ISPs did not face decisions about scarcity in content carriage. But Kavanaugh rejected this theory as inconsistent with the First Amendment’s right not to be compelled to carry a message or speech.

That argument, too, makes little sense as a matter of basic First Amendment law. First Amendment protection does not go away simply because you have a large communications platform. A large bookstore has the same right to exercise editorial discretion as a small bookstore. Suppose Amazon has capacity to sell every book currently in publication and therefore does not face the scarcity of space that a bookstore does. Could the Government therefore force Amazon to sell, feature, and promote every book on an equal basis, and prohibit Amazon from promoting or recommending particular books or authors? Of course not. And there is no reason for a different result here. Put simply, the Internet’s technological architecture may mean that Internet service providers can provide unlimited content; it does not mean that they must.

Keep in mind, moreover, why that is so. The First Amendment affords editors and speakers the right not to speak and not to carry or favor unwanted speech of others, at least absent sufficient governmental justification for infringing on that right… That foundational principle packs at least as much punch when you have room on your platform to carry a lot of speakers as it does when you have room on your platform to carry only a few speakers.

Turner Scrutiny and Bottleneck Market Power

Finally, Kavanaugh applied Turner scrutiny and found that, at the very least, it requires a finding of “bottleneck market power” that would allow ISPs to harm consumers. 

At the time of the Turner Broadcasting decisions, cable operators exercised monopoly power in the local cable television markets. That monopoly power afforded cable operators the ability to unfairly disadvantage certain broadcast stations and networks. In the absence of a competitive market, a broadcast station had few places to turn when a cable operator declined to carry it. Without Government intervention, cable operators could have disfavored certain broadcasters and indeed forced some broadcasters out of the market altogether. That would diminish the content available to consumers. The Supreme Court concluded that the cable operators’ market-distorting monopoly power justified Government intervention. Because of the cable operators’ monopoly power, the Court ultimately upheld the must-carry statute…

The problem for the FCC in this case is that here, unlike in Turner Broadcasting, the FCC has not shown that Internet service providers possess market power in a relevant geographic market… 

Rather than addressing any problem of market power, the net neutrality rule instead compels private Internet service providers to supply an open platform for all would-be Internet speakers, and thereby diversify and increase the number of voices available on the Internet. The rule forcibly reduces the relative voices of some Internet service and content providers and enhances the relative voices of other Internet content providers.

But except in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry… Turner Broadcasting did not allow the Government to satisfy intermediate scrutiny merely by asserting an interest in diversifying or increasing the number of speakers available on cable systems. After all, if that interest sufficed to uphold must-carry regulation without a showing of market power, the Turner Broadcasting litigation would have unfolded much differently. The Supreme Court would have had little or no need to determine whether the cable operators had market power. But the Supreme Court emphasized and relied on the Government’s market power showing when the Court upheld the must-carry requirements… To be sure, the interests in diversifying and increasing content are important governmental interests in the abstract, according to the Supreme Court. But absent some market dysfunction, Government regulation of the content carriage decisions of communications service providers is not essential to furthering those interests, as is required to satisfy intermediate scrutiny.

In other words, without a finding of bottleneck market power, there would be no basis for satisfying the government interest prong of Turner.

Applying Kavanaugh’s Dissent to NetChoice v. Paxton

Interestingly, each of these main points arises in the debate over regulating social-media companies as common carriers. Texas’ H.B. 20 attempts to do exactly that, which is at the heart of the litigation in NetChoice v. Paxton.

Common Carriage and First Amendment Protection

To the first point, Texas attempts to claim in its briefs that social-media companies are common carriers subject to lesser First Amendment protection: “Assuming the platforms’ refusals to serve certain customers implicated First Amendment rights, Texas has properly denominated the platforms common carriers. Imposing common-carriage requirements on a business does not offend the First Amendment.”

But much like the cable operators before them in Turner, social-media companies are not simply carriers of persons or things like the classic examples of railroads, telegraphs, and telephones. As TechFreedom put it in its brief: “As its name suggests… ‘common carriage’ is about offering, to the public at large and on indiscriminate terms, to carry generic stuff from point A to point B. Social media websites fulfill none of these elements.”

In a sense, it’s even clearer that social-media companies are not common carriers than it was in the case of ISPs, because social-media platforms have always had terms of service that limit what can be said and that even allow the platforms to remove users for violations. All social-media platforms curate content for users in ways that ISPs normally do not.

‘Use It or Lose It’ Right to Editorial Discretion

Just as the FCC did in the Title II context, Texas also presses the idea that social-media companies gave up their right to editorial discretion by disclaiming the choice to exercise it, stating: “While the platforms compare their business policies to classic examples of First Amendment speech, such as a newspaper’s decision to include an article in its pages, the platforms have disclaimed any such status over many years and in countless cases. This Court should not accept the platforms’ good-for-this-case-only characterization of their businesses.” Pointing primarily to cases where social-media companies have invoked Section 230 immunity as a defense, Texas argues they have essentially lost the right to editorial discretion.

This, again, flies in the face of First Amendment jurisprudence, as Kavanaugh earlier explained. Moreover, the idea that social-media companies have disclaimed editorial discretion due to Section 230 is inconsistent with what that law actually does. Section 230 allows social-media companies to engage in as much or as little content moderation as they so choose by holding the third-party speakers accountable rather than the platform. Social-media companies do not relinquish their First Amendment rights to editorial discretion because they assert an applicable defense under the law. Moreover, social-media companies have long had rules delineating permissible speech, and they enforce those rules actively.

Interestingly, there has also been an analogue to the idea forwarded in USTelecom that the law’s First Amendment burdens are relatively limited. As noted above, then-Judge Kavanaugh rejected the idea forwarded by the concurrence that net-neutrality rules were essentially voluntary. In the case of H.B. 20, the bill’s original sponsor recently argued on Twitter that the Texas law essentially incorporates Section 230 by reference. If this is true, then the rules would be as pointless as the net-neutrality rules would have been, because social-media companies would be free under Section 230(c)(2) to remove “otherwise objectionable” material under the Texas law.

Scarcity and Capacity to Carry Content

In an earlier brief to the 5th Circuit, Texas attempted to differentiate social-media companies from the cable companies in Turner by asserting there was no necessary conflict between speakers: “[HB 20] does not, for example, pit one group of speakers against another.” But this is just a different way of saying that, since social-media companies don’t face scarcity in their technical capacity to carry speech, they can be required to carry all speech. This is inconsistent with the right Kavanaugh identified not to carry a message or speech, which is not subject to an exception that depends on the platform’s capacity to carry more speech.

Turner Scrutiny and Bottleneck Market Power

Finally, Judge Kavanaugh’s application of Turner to ISPs makes clear that a showing of bottleneck market power is necessary before common-carriage regulation may be applied to social-media companies. In fact, Kavanaugh used a comparison to social-media sites and broadcasters as a reductio ad absurdum for the idea that one could regulate ISPs without a showing of market power. As he put it there:

Consider the implications if the law were otherwise. If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

Much like the FCC with its Open Internet Order, Texas did not make a finding of bottleneck market power in H.B. 20. Instead, Texas basically asked for the opportunity to get to discovery to develop the case that social-media platforms have market power, stating that “[b]ecause the District Court sharply limited discovery before issuing its preliminary injunction, the parties have not yet had the opportunity to develop many factual questions, including whether the platforms possess market power.” This won’t fly under Turner, which required a legislative finding of bottleneck market power that is simply absent from H.B. 20.

Moreover, bottleneck market power means more than simply “market power” in an antitrust sense. As Judge Kavanaugh put it: “Turner Broadcasting seems to require even more from the Government. The Government apparently must also show that the market power would actually be used to disadvantage certain content providers, thereby diminishing the diversity and amount of content available.” Here, that would mean not only that social-media companies have market power, but they want to use it to disadvantage users in a way that makes less diverse content and less total content available.

The economics of multi-sided markets is probably the best explanation for why platforms have moderation rules. They are used to maximize a platform’s value by keeping as many users engaged and on those platforms as possible. In other words, the effect of moderation rules is to increase the amount of user speech by limiting harassing content that could repel users. This is a much better explanation for these rules than “anti-conservative bias” or a desire to censor for censorship’s sake (though there may be room for debate on the margin when it comes to the moderation of misinformation and hate speech).

In fact, social-media companies, unlike the cable operators in Turner, do not have the type of “physical connection between the television set and the cable network” that would grant them “bottleneck, or gatekeeper, control over” speech in ways that would allow platforms to “silence the voice of competing speakers with a mere flick of the switch.” Cf. Turner, 512 U.S. at 656. Even if they tried, social-media companies simply couldn’t prevent Internet users from accessing content they wish to see online; they inevitably will find such content by going to a different site or app.

Conclusion: The Future of the First Amendment Online

While many on both sides of the partisan aisle appear to see a stark divide between the interests of—and First Amendment protections afforded to—ISPs and social-media companies, Kavanaugh’s opinion in USTelecom shows clearly that they are in the same boat. The two rise or fall together. If the government can impose common-carriage requirements on social-media companies in the name of free speech, then it most assuredly can when it comes to ISPs. If the First Amendment protects the editorial discretion of one, then it does for both.

The question then moves to relative market power, and whether the dominant firms in either sector can truly be said to have “bottleneck” market power, which implies the physical control of infrastructure that social-media companies certainly lack.

While it will be interesting to see what the 5th Circuit (and, likely, the Supreme Court) ultimately does when reviewing H.B. 20 and similar laws, if now-Justice Kavanaugh’s dissent is any hint, there will be a strong contingent on the Court for finding that the First Amendment applies online by protecting the right of private actors (ISPs and social-media companies alike) to set the rules of the road on their own property. As Kavanaugh put it in Manhattan Community Access Corp. v. Halleck: “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Competition, not prophylactic government regulation, is the best way to protect consumers’ interests.

With the 11th Circuit upholding the stay against Florida’s social-media law and the Supreme Court granting the emergency application to vacate the stay of the injunction in NetChoice v. Paxton, the future of the First Amendment online appears to be on solid ground. There is no basis to conclude that simply calling private actors “common carriers” reduces their right to editorial discretion under the First Amendment.

States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.
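To make those benchmarks concrete, here is a minimal sketch in Python (my own illustration; the function name and numeric shorthand are not from the NOFO) of how the speed tiers sort a given location:

# A minimal sketch (not from the NOFO) of the BEAD speed benchmarks described above.
# A location counts as served only if it meets both the download and upload thresholds.
def classify_location(download_mbps: float, upload_mbps: float) -> str:
    if download_mbps < 25 or upload_mbps < 3:
        return "unserved"      # below the 25/3 Mbps benchmark (0/0 means no service at all)
    if download_mbps < 100 or upload_mbps < 20:
        return "underserved"   # meets 25/3 but falls short of 100/20 Mbps
    return "served"

print(classify_location(0, 0))     # unserved
print(classify_location(50, 10))   # underserved
print(classify_location(300, 30))  # served

On this reading, a location with 50/10 Mbps service counts as underserved even though it clears the 25/3 bar, which is why the definition of “underserved” matters so much for how far BEAD dollars will stretch.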

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impede Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible, for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected, and many underserved consumers who need access to faster connections, but resources are finite.

It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

That is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation of broadband.

The Infrastructure Act requires that each recipient of BEAD funding must offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.
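A stylized numerical example (hypothetical figures, chosen only to illustrate the mechanism Manne and Adams describe) makes the cost-recovery point concrete:

# A toy model (hypothetical numbers) of flat pricing versus usage-based billing.
monthly_network_cost = 10_000.0        # total cost the provider must recover
usage_gb = [50] * 90 + [500] * 10      # 90 light users and 10 heavy users

# Flat pricing: everyone pays the same, so light users subsidize heavy ones.
flat_price = monthly_network_cost / len(usage_gb)   # $100.00 per user

# Usage-based billing: each user pays in proportion to the traffic generated.
total_usage = sum(usage_gb)                         # 9,500 GB
def ubb_price(gb: float) -> float:
    return monthly_network_cost * gb / total_usage

print(round(flat_price, 2))      # 100.0 for every user
print(round(ubb_price(50), 2))   # 52.63 for a light user
print(round(ubb_price(500), 2))  # 526.32 for a heavy user

Under the flat tariff, the 90 light users each pay $100 despite generating a small fraction of the traffic; under the usage-based tariff, their price falls to roughly $53, while the heaviest users bear costs proportionate to their consumption.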

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) must also include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other provisions, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GONs) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort, to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily favors fiber, both in its funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission for purposes of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated in the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include these potentially onerous requirements in the original infrastructure bill, yet NTIA now seeks to impose them without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that fix these deficiencies.

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—”Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by revisiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working in the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko; recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. v. Connecticut opinion, in which the Court held, broadly, that a decision by Congress to regulate displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, known as antitrust is for its vicissitudes and its adherence to the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was ambiguous, and whether it was subject to the FCC’s interpretation, was a central debate within the regulatory community during 2012 and 2013. The broad consensus, at least among my peers, was that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own terms (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust” was right—that the courts would push development of antitrust law from the judiciary to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority, including whether it was necessarily coterminous with the Sherman Act (and so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the continual currents that propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail, as well.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based in the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds the Court’s then-prevailing hostility toward domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative-law doctrine. Thus, the Court’s longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and laments “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer wrote in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners long ago held that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) similarly unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to rise at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for the majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which it acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring ever-more-aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra—which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This operates both through the administrative-law vector, with the Court reconsidering how it views delegation of congressional authority to agencies (such as through the major questions doctrine and limits on agency rulemaking authority), and through the Court’s thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case holding that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 201 of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, of the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine and the courts’ acceptance of broad delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases—but, like the major questions doctrine, they go to the same issue of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that can develop legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and, on the other, an agency with authority to develop substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors. In practice, there is a world of difference between these approaches. As with the concerns animating the major questions doctrine, were the Court to review National Petroleum Refiners or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential constraints on the scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may meet a tepid judicial reception on antitrust grounds, as well.

Many of the arguments advanced in “Administrative Antitrust,” and in the Court’s opinions on the antitrust-regulation interface, echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement that: “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct its error. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate while the agency swings between policies with changing administrations.
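One stylized way to express that comparison (my own rendering, not notation drawn from Easterbrook or the Court) is that the expected social cost of an erroneous rule scales with the probability of error, the per-period harm, and the error’s expected lifespan:

E[\text{cost}] \approx p_{\text{error}} \cdot h \cdot T_{\text{correction}}

where p_error is the probability the decisionmaker errs, h is the per-period harm the error causes, and T_correction is the expected time before the error is corrected. On this rendering, administrative antitrust is attractive only if agencies keep T_correction short; if political oscillation causes errors to recur rather than resolve, T_correction balloons and the administrative option loses its claimed advantage.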

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to be left to the vagaries of shifting political will. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that it has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, and that the FTC would likely receive broad Chevron deference for interpretations of its UMC authority that shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, marked by skepticism of broad deference and of expansive agency claims of authority.

Many of the underlying rationales behind the idea of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law and that the courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or that seem more grounded in policy better made by Congress, it is likely to find itself on the losing side of judicial opinions.

[This guest post from Lawrence J. Spiwak of the Phoenix Center for Advanced Legal & Economic Public Policy Studies is the second in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

While antitrust and regulation are supposed to be different sides of the same coin, there has always been a healthy debate over which enforcement paradigm is the more efficient. Those who have long suffered under the zealous hand of ex ante regulation would gladly prefer the more dispassionate, case-specific oversight of antitrust. Conversely, those dissatisfied with the current state of antitrust enforcement have increasingly called to abandon antitrust’s ex post approach and return to some form of active, “always on” regulation.

While the “antitrust versus regulation” debate has raged for some time, the election of President Joe Biden has brought a new wrinkle: Lina Khan, the controversial chair of the Federal Trade Commission (FTC), has made it very clear that she would like to expand the commission’s role from that of a mere enforcer of the nation’s antitrust laws to that of an agency that also promulgates ex ante “bright line” rules. Thus, the “antitrust versus regulation” debate is no longer academic.

Khan’s efforts to convert the FTC into a de facto regulator should surprise no one, however. Even before she was nominated, Khan was quite vocal about her policy vision for the FTC. For example, in 2020, she co-authored an essay with her former boss (and later, briefly, her FTC colleague) Rohit Chopra in the University of Chicago Law Review titled “The Case for ‘Unfair Methods of Competition’ Rulemaking.” In it, Khan and Chopra lay out both legal and policy arguments to support “unfair methods of competition” (UMC) rulemaking. But as I explain in a law review article published last year in the Federalist Society Review, titled “A Change in Direction for the Federal Trade Commission?”, Khan and Chopra’s arguments simply do not hold up to scrutiny. While I encourage those interested in the bounds of the FTC’s UMC rulemaking authority to read my paper in full, for purposes of this symposium, I include a brief summary of my analysis below.

At the outset of their essay, Chopra and Khan lay out what they believe to be the shortcomings of modern antitrust enforcement. As they correctly note, “[a]ntitrust law today is developed exclusively through adjudication,” which is designed to “facilitate[] nuanced and fact-specific analysis of liability and well-tailored remedies.” However, the authors contend that, while a case-by-case approach may sound great in theory, “in practice, the reliance on case-by-case adjudication yields a system of enforcement that generates ambiguity, unduly drains resources from enforcers, and deprives individuals and firms of any real opportunity to democratically participate in the process.” Chopra and Khan blame this alleged policy failure on the abandonment of per se rules in favor of the “rule-of-reason” approach in antitrust jurisprudence. In their view, a rule-of-reason approach is nothing more than “a broad and open-ended inquiry into the overall competitive effects of particular conduct [which] asks judges to weigh the circumstances to decide whether the practice at issue violates the antitrust laws.” To remedy this perceived analytical shortcoming, they argue that the commission should step into the breach and promulgate ex ante bright-line rules to better enforce the prohibition against UMC outlined in Section 5 of the Federal Trade Commission Act.

As a threshold matter, while courts have traditionally provided guidance as to what exactly constitutes “unfair methods of competition,” Chopra and Khan argue that it should be the FTC that has that responsibility in the first instance. According to Chopra and Khan, because Congress set up the FTC as the independent expert agency to implement the FTC Act and because the phrase “unfair methods of competition” is ambiguous, courts must accord great deference to “FTC interpretations of ‘unfair methods of competition’” under the Supreme Court’s Chevron doctrine.

The authors then argue that the FTC has statutory authority to promulgate substantive rules to enforce the FTC’s interpretation of UMC. In particular, they point to the broad catch-all provision in Section 6(g) of the FTC Act. Section 6(g) provides, in relevant part, that the FTC may “[f]rom time to time . . . make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Although this catch-all rulemaking provision is far from the detailed statutory scheme Congress set forth in the Magnuson-Moss Act to govern rulemaking to deal with Section 5’s other prohibition against “unfair or deceptive acts and practices” (UDAP), Chopra and Khan argue that the D.C. Circuit’s 1973 ruling in National Petroleum Refiners Association v. FTC—a case that predates the Magnuson-Moss Act—provides judicial affirmation that the FTC has the authority to “promulgate substantive rules, not just procedural rules” under Section 6(g). Stating Khan’s argument differently: although there may be no affirmative specific grant of authority for the FTC to engage in UMC rulemaking, in the absence of any limit on such authority, the FTC may engage in UMC rulemaking subject to the constraints of the Administrative Procedure Act.

As I point out in my paper, while there are certainly strong arguments that the FTC lacks UMC rulemaking authority (see, e.g., Ohlhausen & Rill, “Pushing the Limits? A Primer on FTC Competition Rulemaking”), it is my opinion that, given the current state of administrative law—in particular, the high level of judicial deference accorded to agencies under both Chevron and the “arbitrary and capricious standard”—whether the FTC can engage in UMC rulemaking remains a very open question.

That said, even if we assume arguendo that the FTC does, in fact, have UMC rulemaking authority, the case law nonetheless reveals that, despite Khan’s hopes and desires, the FTC cannot unilaterally abandon the consumer welfare standard. As I explain in detail in my paper, even with great judicial deference, it is well-established that independent agencies simply cannot ignore antitrust terms of art (especially when an agency is specifically charged with enforcing the antitrust laws). Thus, Khan may get away with initiating UMC rulemaking, but attempting to impose, for example, a mandatory common-carrier-style nondiscrimination rule may be a bridge too far.

Khan’s Policy Arguments in Favor of UMC Rulemaking

Separate from the legal debate over whether the FTC can engage in UMC rulemaking, it is also important to ask whether the FTC should engage in UMC rulemaking. Khan essentially posits that the American economy needs a generic business regulator possessed with plenary power and expansive jurisdiction. Given the United States’ well-documented (and sordid) experience with public-utility regulation, that’s probably not a good idea.

Indeed, to Khan and Chopra, ex ante regulation is superior to ex post antitrust enforcement. First, they submit that UMC “rulemaking would enable the Commission to issue clear rules to give market participants sufficient notice about what the law is, helping ensure that enforcement is predictable.” Second, they argue that “establishing rules could help relieve antitrust enforcement of steep costs and prolonged trials.” In particular, “[t]argeting conduct through rulemaking, rather than adjudication, would likely lessen the burden of expert fees or protracted litigation, potentially saving significant resources on a present-value basis.” Third, they contend that rulemaking “would enable the Commission to establish rules through a transparent and participatory process, ensuring that everyone who may be affected by a new rule has the opportunity to weigh in on it, granting the rule greater legitimacy.”

Khan’s published writings argue forcefully for greater regulatory power, but they suffer from analytical omissions that render her judgment questionable. For example, it is axiomatic that, while it is easy to imagine or theorize about the many benefits of regulation, regulation imposes significant costs of both the intended and unintended sorts. These costs can include compliance costs, reductions of innovation and investment, and outright entry deterrence that protects incumbents. Yet nowhere in her co-authored essay does Khan contemplate a cost-benefit analysis before promulgating a new regulation; she appears to assume that regulation is always costless, easy, and beneficial, on net. Unfortunately, history shows that we cannot always count on FTC commissioners to engage in wise policymaking.

Khan also fails to contemplate the possibility that changing market circumstances or inartful drafting might call for the removal of regulations previously imposed. Among other things, this failure calls into question her rationale that “clear rules” would make “enforcement … predictable.” If clear rules were so easy and so beneficial, why does the government not always use them, instead of the ham-handed approach typical of regulatory interventions? More important, enforcement of rules still requires case-by-case adjudication, governed by precedent from prior applications of the rule and by due process.

Taken together, Khan’s analytical omissions reveal a lack of historical awareness about (and apparently any personal experience with) the realities of modern public-utility regulation. Indeed, Khan offers up as an example of purported rulemaking success the Federal Communications Commission’s 2015 Open Internet Order, which imposed legacy common-carrier regulations designed for the old Ma Bell monopoly on the internet. But as I detail extensively in my paper, the history of net-neutrality regulation bears witness that Khan’s assertions that this process provided “clear rules,” was faster and cheaper, and allowed for meaningful public participation simply are not true.

President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services. 

History has, by now, definitively demonstrated that the FCC’s January 2018 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it. 

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.
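A toy example (hypothetical numbers, not drawn from Sidak and Teece) illustrates the offset they describe:

# A toy model (hypothetical numbers) of the two-sided pricing point above:
# optional QoS fees paid by edge providers can offset costs that would
# otherwise be recovered entirely from broadband subscribers.
network_cost = 1_000_000.0   # monthly network cost to recover
subscribers = 20_000

def subscription_price(qos_revenue: float) -> float:
    return (network_cost - qos_revenue) / subscribers

print(subscription_price(0))         # 50.0 with no QoS revenue
print(subscription_price(200_000))   # 40.0 when QoS fees cover 20% of costs

Every dollar of optional business-to-business QoS revenue is a dollar that need not be recovered from end users, which is the sense in which paid prioritization can promote, rather than undermine, broadband adoption.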

The Perennial Threat of Price Controls

Although it was only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation.

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, even the threat of rate regulation by the FCC could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential use to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which sought to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication.

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good while putting downward pressure on the supply of that good.
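To make the mechanism concrete, here is a minimal numerical sketch of a binding price cap, using hypothetical linear supply and demand curves invented purely for illustration (nothing here is calibrated to the broadband market):

```python
# Hypothetical linear curves, chosen only to illustrate the mechanism:
# quantity demanded Qd(p) = 100 - p, quantity supplied Qs(p) = p.

def demand(p):
    return max(0, 100 - p)

def supply(p):
    return max(0, p)

# Unregulated equilibrium: 100 - p = p, so the market clears at p* = 50.
p_star = 50
print(demand(p_star), supply(p_star))  # 50 50 -> market clears, no shortage

# A cap below p* stimulates demand and suppresses supply; the gap is
# the shortage.
cap = 30
print(demand(cap), supply(cap), demand(cap) - supply(cap))  # 70 30 40
```

The specific numbers are arbitrary; what generalizes is that any binding cap opens a wedge between quantity demanded and quantity supplied.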

Broadband rate regulation, whether implicit or explicit, would have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments in the first place. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers invest less in maintaining their networks; they would also invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (pp. 9-10)
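The logic of the Spence observation can be sketched formally. The notation below is ours, introduced for illustration, not taken from the quoted source:

```latex
% Stylized sketch (our notation): \bar{p} = binding price ceiling,
% s = quality, q(\bar{p}, s) = quantity demanded, c(q, s) = cost,
% v(x, s) = the x-th consumer's valuation of the service.
\[
\max_{s}\; \pi(s) = \bar{p}\, q(\bar{p}, s) - c\bigl(q(\bar{p}, s), s\bigr)
\]
% The capped firm's return to quality runs only through the marginal
% consumers it attracts: \bar{p}\, \partial q / \partial s. Total surplus,
% by contrast, also rises by the inframarginal consumers' gain,
\[
\int_{0}^{q} \frac{\partial v(x, s)}{\partial s}\, dx ,
\]
% which the capped firm cannot capture. Omitting that term from its
% calculus, the firm sets s below the surplus-maximizing level.
```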

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p. 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns dissipates where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market: if rate regulation sets the price on one side of the market at zero, there will be upward pricing pressure on the other side. That means higher prices for consumers (or else yet another layer of imprecise, complex regulation and even deeper constraints on investment).
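That waterbed dynamic is easy to see in a toy model. The sketch below assumes a hypothetical platform with linear, mutually reinforcing participation on each side; every parameter is invented for illustration, and none describes an actual broadband market:

```python
# Toy two-sided platform: participation on each side depends on its own
# price and on the other side's participation.
#   n_c = 10 - p_c + 0.5 * n_e   (consumers)
#   n_e = 10 - p_e + 0.5 * n_c   (edge providers)
# Solving the two equations jointly gives the closed forms below.

def participation(p_c, p_e):
    n_c = (15 - p_c - 0.5 * p_e) / 0.75
    n_e = (15 - p_e - 0.5 * p_c) / 0.75
    return n_c, n_e

def profit(p_c, p_e):
    n_c, n_e = participation(p_c, p_e)
    return p_c * n_c + p_e * n_e  # zero costs, for simplicity

grid = [i / 10 for i in range(151)]  # candidate prices 0.0 .. 15.0

# Unconstrained: the platform picks both prices freely.
free = max((profit(pc, pe), pc, pe) for pc in grid for pe in grid)
# Regulated: the edge-provider price is forced to zero.
zero = max((profit(pc, 0.0), pc, 0.0) for pc in grid)

print("unconstrained:  profit=%.1f p_c=%.1f p_e=%.1f" % free)  # 100.0 5.0 5.0
print("edge price = 0: profit=%.1f p_c=%.1f p_e=%.1f" % zero)  # 75.0 7.5 0.0
```

In this toy parameterization, forcing the edge-provider price to zero pushes the profit-maximizing consumer price from 5.0 to 7.5: the platform recovers on one side what regulation takes away on the other.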

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities scrutinize them under a rule of reason, requiring a showing of substantial exclusionary effect and a lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, contained no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle, with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well-spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

The primary aim of the bill’s broadband provisions is to connect the truly unconnected—what the bill refers to as the “unserved” (those lacking a connection of at least 25/3 Mbps) and the “underserved” (those lacking a connection of at least 100/20 Mbps). In seeking to realize this goal, it’s important to bear in mind that dynamic analysis demonstrates the broadband market is overwhelmingly healthy, even in locales with relatively few market participants. According to the Federal Communications Commission’s (FCC) latest Broadband Progress Report, approximately 5% of U.S. consumers have no options for at least 25/3 Mbps broadband, and slightly more than 8% have no options for at least 100/10 Mbps.

Reaching the truly unserved portions of the country will require targeting subsidies toward areas that are currently uneconomic to reach. Without properly targeted subsidies, there is a risk of dampening incentives for private investment and slowing broadband buildout. These tradeoffs must be considered. As we wrote previously in our Broadband Principles issue brief:

  • To move forward successfully on broadband infrastructure spending, Congress must take seriously the roles of both the government and the private sector in reaching the unserved.
  • Current U.S. broadband infrastructure is robust, as demonstrated by the way it met the unprecedented surge in demand for bandwidth during the recent COVID-19 pandemic.
  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing Internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations; firms in competitive markets should be subject to lighter-touch obligations.

The Good

The BIF’s broadband provisions ended up in a largely positive place, at least as written. There are two primary ways it seeks to achieve its goals of promoting adoption and deploying broadband to unserved/underserved areas. First, it makes permanent the Emergency Broadband Benefit program that had been created to provide temporary aid to households who struggled to afford Internet service during the COVID-19 pandemic, though it does lower the monthly user subsidy from $50 to $30. The renamed Affordable Connectivity Program can be used to pay for broadband on its own, or as part of a bundle of other services (e.g., a package that includes telephone, texting, and the rental fee on equipment).

Relatedly, the bill also subsidizes the cost of equipment by extending a one-time reimbursement of up to $100 to broadband providers when a consumer takes advantage of the provider’s discounted sale of connected devices, such as laptops, desktops, or tablet computers capable of Wi-Fi and video conferencing. 

The decision to make the emergency broadband benefit a permanent program broadly comports with recommendations we have made to employ user subsidies (such as connectivity vouchers) to encourage broadband adoption.

The second and arguably more important of the bill’s broadband provisions is its creation of the $42 billion Broadband Equity, Access and Deployment (BEAD) Program. Under the direction of the NTIA, BEAD will direct grants to state governments to help the states expand access to and use of high-speed broadband.  

On the bright side, BEAD does appear to be designed to connect the country’s truly unserved regions—which, as noted above, account for about 8% of the nation’s households. The law explicitly requires prioritizing unserved areas before underserved areas. Even where the text references underserved areas as an additional priority, it does so in a way that won’t necessarily distort private investment. The bill also creates preferences for projects in persistent-poverty and high-poverty areas. Thus, the targeted areas are very likely to fall on the “have-not” side of the digital divide.

On its face, the subsidy and grant approach taken in the bill is, all things considered, commendable. As we note in our broadband report, care must be taken to avoid interventions that distort private investment incentives, particularly in a successful industry like broadband. The goal, after all, is more broadband deployment. If policy interventions only replicate private options (usually at higher cost) or, worse, drive private providers from a market, broadband deployment will be slowed or reversed. The approach taken in this bill attempts to line up private incentives with regulatory goals.

As we discuss below, however, the devil is in the details. In particular, BEAD’s structure could theoretically allow enough discretion in execution that a large amount of waste, fraud, and abuse could end up frustrating the program’s goals.

The Bad

While the bill largely keeps the right focus of building out broadband in unserved areas, there are reasons to question some of its preferences and solutions. For instance, the state subgrant process puts for-profit and government-run broadband solutions on a level playing field for the purposes of receiving funds, even though the two types of entities exist in very different institutional environments with very different incentives.

There is also a requirement that funded projects provide broadband of at least 100/20 Mbps, even though the bill defines “unserved” as lacking at least 25/3 Mbps. While this is not terribly objectionable, the preference for 100/20 Mbps could have downstream effects on the hardest-to-connect areas. It may be economically feasible to connect some very remote areas only at 25/3 Mbps. Requiring higher speeds in such areas may, despite the best intentions, slow deployment and push providers to prioritize areas that are relatively easier to connect.

For comparison, the FCC’s Connect America Fund and Rural Digital Opportunity Fund programs do give greater weight in bidding to providers that can deploy higher-speed connections. But in areas where a lower speed tier is cost-justified, a provider can still bid and win. That approach would have been preferable in the infrastructure bill.

But the bill’s largest infirmity is not in its terms or aims, but in the potential for mischief in its implementation. In particular, the BEAD grant program lacks the safeguards that have traditionally been applied to this sort of funding at the FCC. 

Typically, an aid program of this sort would be administered by the FCC under rulemaking bound by the Administrative Procedure Act (APA). As cumbersome as that process may sometimes be, APA rulemaking provides a high degree of transparency that results in fairly reliable public accountability. BEAD, by contrast, eschews this process and instead permits NTIA to work directly with governors and other relevant state officials to dole out the money. The funds will almost certainly be distributed more quickly, but with significantly less accountability and oversight.

A large amount of the implementation detail will be driven at the state level. By definition, this will make it more difficult to monitor how well the program’s aims are being met. It also creates a process with far more opportunities for highly interested parties to lobby state officials to direct funding to their individual pet projects. None of this is to say that BEAD funding will necessarily be misdirected, but NTIA will need to be very careful in how it proceeds.

Conclusion: The Opportunity

Although the BIF’s broadband funds are slated to be distributed next year, we may soon be able to see whether there are warning signs that the legitimate goal of broadband deployment is being derailed for political favoritism. BEAD initially grants a flat $100 million to each state; it is only additional monies over that initial amount that need to be sought through the grant program. Thus, it is highly likely that some states will begin to enact legislation and related regulations in the coming year based on that guaranteed money. This early regulatory and legislative activity could provide insight into the pitfalls the full BEAD grantmaking program will face.

The larger point, however, is that the program needs safeguards. Where Congress declined to adopt them, NTIA would do well to implement them. Obviously, this will be something short of full APA rulemaking, but the NTIA will need to make accountability and reliability a top priority to ensure that the digital divide is substantially closed.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as by allowing direct sales of hearing aids in drug stores and by helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order emphasize new regulation—such as net-neutrality requirements that may reduce investment in broadband by Internet service providers—and the imposition of new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meatpackers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers and to engage in new antitrust rulemaking are likely to create costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will create unnecessary business uncertainty, in addition to the public and private resources wasted on litigation.

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.
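As a quick back-of-the-envelope check (the 98%-over-20-years figure is the one cited above; the arithmetic is ours), that decline implies prices falling at a compound rate of roughly 18% per year:

```python
# If price-per-Mbps fell 98% over 20 years, the implied average annual
# rate of decline r solves (1 - r)**20 = 0.02.
annual_factor = 0.02 ** (1 / 20)   # ~0.822 of the prior year's price
annual_decline = 1 - annual_factor
print(f"implied annual decline: {annual_decline:.1%}")  # ~17.8%
```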

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein reportedly said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.


Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints. 

Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers considering the bills have not yet considered.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since any such integration could be challenged as “discrimination” against would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.

In fact, the bill’s scope is so broad that some have argued that the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that, “Due to limited resources, not all platform integration will be challenged.” 

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found to have breached the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially enormous sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and whose effect may yet be undone by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. Anti-corruption laws are frequently used to punish opponents of the regime in China, who probably are also corrupt, but are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security law has also been used to target peaceful protestors and critical media thanks to its vague and overly broad drafting. 

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws, applied at the enforcer’s discretion, give the enforcer wide latitude to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her, may wish to use this discretionary power to prosecute firms that she feels are hurting society for unrelated reasons, such as because of political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but their application would require contentious value judgements that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but she has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem,” Khan approvingly highlights former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.