Archives For Broadband

As the U.S. House Energy and Commerce Subcommittee on Oversight and Investigations convenes this morning for a hearing on overseeing federal funds for broadband deployment, it bears mention that one of the largest U.S. broadband-subsidy programs is actually likely to run out of money within the next year. Writing in Forbes, Roslyn Layton observes of the Affordable Connectivity Program (ACP) that it has enrolled more than 14 million households, concluding that it “may be the most effective broadband benefit program to date with its direct to consumer model.”

This may be true, but how should we measure effectiveness? One seemingly simple measure would be the number of households with at-home internet access who would not have it but for the ACP’s subsidies. Those households can be broadly divided into two groups:

  1. Households that signed up for ACP and got at-home internet; and
  2. Households that have at-home internet, but wouldn’t if they didn’t receive the ACP subsidies.

Conceptually, evaluating the first group is straightforward. We can survey ACP subscribers and determine whether they had internet access before receiving the ACP subsidies. The second group is much more difficult, if not impossible, to measure with the available information. We can only guess as to how many households would unsubscribe if the subsidies went away.

To give a bit of background on the program we now call the ACP: broadband has been included since 2016 as a supported service under the Federal Communications Commission’s (FCC) Lifeline program. Among the Lifeline program’s goals are to ensure the availability of broadband for low-income households (to close the so-called “digital divide”) and to minimize the Universal Service Fund contribution burden levied on consumers and businesses.

As part of the appropriations act enacted in 2021 in response to the COVID-19 pandemic, Congress created a temporary $3.2 billion Emergency Broadband Benefit (EBB) program within the Lifeline program. EBB provided eligible households with a $50 monthly discount on qualifying broadband service or bundled voice-broadband packages purchased from participating providers, as well as a one-time discount of up to $100 for the purchase of a device (computer or tablet). The EBB program was originally set to expire when the funds were depleted, or six months after the U.S. Department of Health and Human Services (HHS) declared an end to the pandemic.

With passage of the Infrastructure Investment and Jobs Act (IIJA) in November 2021, the EBB’s temporary subsidy was extended indefinitely and renamed the Affordable Connectivity Program, or ACP. The IIJA allocated an additional $14 billion to provide subsidies of $30 a month to eligible households. Without additional appropriations, the ACP is expected to run out of funding by early 2024.

The Case of the Nonadopters

According to the Information Technology and Innovation Foundation (ITIF), 97.6% of the U.S. population has access to a fixed connection of at least 25/3 Mbps through asymmetric digital subscriber line (ADSL), cable, fiber, or fixed wireless. Pew Research reports that 93% of its survey respondents indicated they have a broadband connection at home.

Pew’s results are in line with U.S. Census estimates from the American Community Survey. The figure below, summarizing information from 2021, shows that 92.6% of households had a broadband subscription or had access without having to pay for a subscription. Assuming ITIF’s estimates of broadband availability are accurate, approximately two-thirds of the households without broadband (about 6.4 million) have access to it.

On the one hand, price is obviously a major factor driving adoption. For example, among the 7.4% of households who do not use the internet at home, Census surveys show about one-third indicate that price is one reason for not having an at-home connection, responding that they “can’t afford it” or that it’s “not worth the cost.” On the other hand, more than half of respondents said they “don’t need it” or are “not interested.”

But George Ford argues that these responses to the Census surveys are unhelpful in evaluating the importance of price relative to other factors. For example, if a consumer says broadband is “not worth the cost,” it’s not clear whether the “worth” is too low or the “cost” is too high. Consumers who are “not interested” in subscribing to an internet service are implicitly saying that they are not interested at current prices. In other words, there may be a price that is sufficiently low that uninterested consumers become interested.

But in some cases, that price may be zero—or even negative.

A 2022 National Telecommunications and Information Administration (NTIA) survey of internet use found that about 75% of offline households said they wanted to pay nothing for internet access. In addition, as shown in the figure above, about a quarter of households without a broadband or smartphone subscription claim that they can access the internet at home without paying for a subscription. Thus, there may be a substantial share of nonadopters who would not adopt even if the service were free to the consumer.

Aside from surveys, another way to evaluate the importance of price in internet-adoption decisions is with empirical estimates of demand elasticity. The price elasticity of demand is the percent change in the quantity demanded for a good, divided by the percent change in price. A demand curve with an elasticity between 0 and –1 is said to be inelastic, meaning the change in the quantity demanded is relatively less responsive to changes in price. An elasticity of less than –1 is said to be elastic, meaning the change in the quantity demanded is relatively more responsive to changes in price.

Michael Williams and Wei Zao’s survey of the research on the price elasticity of demand concludes that demand for internet services has traditionally been inelastic and has “become increasingly so over time.” They report a 2019 elasticity of –0.05 (down from –0.69 in 2008). George Ford’s 2021 study estimates an elasticity ranging from –0.58 to –0.33. These results indicate that a subsidy program that reduced the price of internet services by 10% would increase adoption by anywhere from 0.5% (i.e., one-half of one percent) to 5.8%. In other words, a range from approximately zero to a small but significant increase.
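The arithmetic behind these figures is simple enough to sketch in a few lines. The elasticity values below are the published estimates cited above; the 10% price cut is the same illustrative subsidy, and the labels are ours:

```python
# Back-of-the-envelope sketch of the elasticity arithmetic above.
# Elasticity ~ (% change in quantity) / (% change in price), so the
# implied change in adoption is elasticity times the price change.

def adoption_change(elasticity: float, price_change_pct: float) -> float:
    """Approximate % change in quantity demanded for a given % price change."""
    return elasticity * price_change_pct

price_cut = -10.0  # a subsidy that lowers the consumer price by 10%

for label, e in [("Williams & Zao (2019)", -0.05),
                 ("Ford (2021), low end", -0.33),
                 ("Ford (2021), high end", -0.58)]:
    print(f"{label}: {adoption_change(e, price_cut):+.1f}% change in adoption")
```

Running this reproduces the range in the text: a 10% price reduction implies anywhere from a 0.5% to a 5.8% increase in quantity demanded, depending on which estimate one credits.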

It is unsurprising that the demand for internet services is so inelastic, especially among those who do not subscribe to broadband or smartphone service. One reason is the nature of demand curves. Generally speaking, as quantity demanded increases (i.e., moves downward along the demand curve), the demand curve becomes less elastic, as shown in the figure below (which is an illustration of a hypothetical demand curve). With adoption currently at more than 90% of households, the remaining nonadopters are much less likely to adopt at any price.
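The claim that demand grows less elastic as quantity rises can be illustrated with a hypothetical linear demand curve. The curve Q = 100 − 2P and the prices below are purely illustrative assumptions, not estimates from the studies cited:

```python
# Point elasticity along a hypothetical linear demand curve Q = 100 - 2P.
# All numbers here are illustrative; the point is the pattern: moving down
# the curve (lower price, higher quantity), |elasticity| shrinks.

def point_elasticity(p: float, a: float = 100.0, b: float = 2.0) -> float:
    """Point elasticity of linear demand Q = a - b*P at price p."""
    q = a - b * p
    return -b * p / q  # (dQ/dP) * (P/Q)

for p in (40, 25, 5):  # moving down the demand curve
    q = 100 - 2 * p
    print(f"P={p:>2}  Q={q:>3}  elasticity={point_elasticity(p):.2f}")
```

At a high price and low quantity the elasticity is −4.0 (strongly elastic); at the midpoint it is exactly −1.0; near saturation it falls to about −0.11, squarely in the inelastic range reported for internet service.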

Thus, there is a possibility that the ACP may be so successful that the program has hit a point of significant diminishing marginal returns. Now that nearly 95% of U.S. households with access to at-home internet actually use it, it may be very difficult and costly to convert the remaining 5% of nonadopters. For example, if Williams & Zao’s estimate of a price elasticity of –0.05 is correct, then even a subsidy that provided “free” internet would convert only half of the 5% of nonadopters.

Keeping the Country Connected

With all of this in mind, it’s important to recognize that adoption rates alone should not be the primary metric of the program’s success.

The ACP is not an attempt to create a perfect government program, but rather to address the imperfect realities we face. Some individuals may never adopt internet services, just as some never installed home-telephone services. Even at the peak of landline use in 1998, only 96.2% of households had one.

On the other hand, those who value broadband access may be forced to discontinue service if faced with financial difficulties. Therefore, the program’s objective should encompass both connecting new users and ensuring that economically vulnerable individuals maintain access.

Instead of pursuing an ideal regulatory or subsidy program, we should focus on making the most informed decisions in a context where information is limited. We know there is general demand for internet access and that a significant number of households might discontinue services during economic downturns. And we also know that, in light of these realities, numerous stakeholders advocate for invasive interventions in the broadband market, potentially jeopardizing private investment incentives.

Thus, even if the ACP program is not perfect in itself, it goes a long way toward satisfying the need to make sure the least well-off stay connected, while also allowing private providers to continue their track record of providing high-speed, affordable broadband.

And although we do not have data at the moment demonstrating exactly how many households would discontinue internet service in the absence of subsidies, if Congress does not appropriate additional ACP funds, we may soon have an unfortunate natural experiment that helps us to find out.

Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to accommodate peak demand.

Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will come after another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: The pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.

States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impede Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.

It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

Which is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation on broadband.

The Infrastructure Act requires that each recipient of BEAD funding must offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could potentially be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other problems, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GON) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission for purposes of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs as unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated by the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would serve to provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include in the original infrastructure bill these potentially onerous requirements that NTIA now seeks, all without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that would fix these deficiencies.

President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services. 

History has, by now, definitively demonstrated that the FCC’s December 2017 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it.

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.

The Perennial Threat of Price Controls

Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation. 

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the Commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, the FCC having the ability even to threaten rate regulation could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential to be used to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which sought to assist low-income consumers by capping the price of broadband service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication.

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good, while also creating downward pressure on supply for that good.
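The textbook logic here can be stated compactly. In rough notation (ours, not drawn from any particular source), a binding cap pushes quantity demanded above quantity supplied:

```latex
% Let D(p) and S(p) be quantity demanded and supplied, with D' < 0 and S' > 0,
% and let p^* be the market-clearing price, so that D(p^*) = S(p^*).
% A binding price ceiling \bar{p} < p^* implies
D(\bar{p}) > D(p^*) = S(p^*) > S(\bar{p}),
% leaving a shortage of
D(\bar{p}) - S(\bar{p}) > 0 .
```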

Broadband rate regulation, whether implicit or explicit, would have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recouping quality improvements. Not only would existing broadband providers invest less in maintaining their networks, they would also invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (pp. 9-10)
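Spence’s underprovision result admits a compact sketch (notation ours, following the standard treatment, not the quoted paper):

```latex
% Let u(x,q) be marginal willingness to pay for the x-th unit at quality q.
% Under a binding ceiling \bar{p}, the firm sells x(q) units, where u(x(q),q) = \bar{p},
% and chooses quality q to maximize profit:
\pi(q) = \bar{p}\,x(q) - c\bigl(x(q),q\bigr)
\quad\Longrightarrow\quad
\bar{p}\,x'(q) = c_x\,x'(q) + c_q .
% Total surplus is
W(q) = \int_0^{x(q)} u(s,q)\,ds - c\bigl(x(q),q\bigr),
% so, evaluated at the firm's chosen quality (using u(x(q),q) = \bar{p} and the
% first-order condition above),
W'(q) = \int_0^{x(q)} u_q(s,q)\,ds \;>\; 0 .
% The firm internalizes only the marginal buyer's valuation of quality, not the
% inframarginal surplus, so it stops short of the surplus-maximizing quality.
```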

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p. 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns are dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market. If the price on one side of the market is set at zero through rate regulation, then there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else, it will require another layer of imprecise and complex regulation and even deeper constraints on investment). 
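In a simple two-sided pricing model (in the spirit of Rochet-Tirole; notation ours), the mechanism is straightforward:

```latex
% A platform charges p_A and p_B per interaction to the two sides of the market,
% with per-interaction cost c. Profit is the per-interaction margin scaled by
% interaction volume D(p_A, p_B):
\pi = (p_A + p_B - c)\, D(p_A, p_B).
% The unconstrained optimum splits the total price level p_A + p_B across the
% two sides according to their relative demand elasticities. If regulation fixes
% p_A = 0, the entire cost c (and the platform's margin) must be recovered on
% side B, so the constrained optimum features a higher p_B -- here, a higher
% price to broadband subscribers.
```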

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities typically scrutinize such practices under a rule of reason, requiring a showing of substantial exclusionary effect and a lack of countervailing consumer benefit before condemning them. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle, with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

The primary aim of the bill’s broadband provisions is to connect the truly unconnected—what the bill refers to as the “unserved” (those lacking a connection of at least 25/3 Mbps) and “underserved” (lacking a connection of at least 100/20 Mbps). In seeking to realize this goal, it’s important to bear in mind that dynamic analysis demonstrates that the broadband market is overwhelmingly healthy, even in locales with relatively few market participants. According to the Federal Communications Commission’s (FCC) latest Broadband Progress Report, approximately 5% of U.S. consumers have no options for at least 25/3 Mbps broadband, and slightly more than 8% have no options for at least 100/10 Mbps.

Reaching the truly unserved portions of the country will require targeting subsidies toward areas that are currently uneconomic to reach. Without properly targeted subsidies, there is a risk of dampening incentives for private investment and slowing broadband buildout. These tradeoffs must be considered. As we wrote previously in our Broadband Principles issue brief:

  • To move forward successfully on broadband infrastructure spending, Congress must take seriously the roles of both the government and the private sector in reaching the unserved.
  • Current U.S. broadband infrastructure is robust, as demonstrated by the way it met the unprecedented surge in demand for bandwidth during the recent COVID-19 pandemic.
  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing Internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations. Competitive markets should be subject to lighter-touch obligations.

The Good

The BIF’s broadband provisions ended up in a largely positive place, at least as written. There are two primary ways it seeks to achieve its goals of promoting adoption and deploying broadband to unserved/underserved areas. First, it makes permanent the Emergency Broadband Benefit program that had been created to provide temporary aid to households who struggled to afford Internet service during the COVID-19 pandemic, though it does lower the monthly user subsidy from $50 to $30. The renamed Affordable Connectivity Program can be used to pay for broadband on its own, or as part of a bundle of other services (e.g., a package that includes telephone, texting, and the rental fee on equipment).

Relatedly, the bill also subsidizes the cost of equipment by extending a one-time reimbursement of up to $100 to broadband providers when a consumer takes advantage of the provider’s discounted sale of connected devices, such as laptops, desktops, or tablet computers capable of Wi-Fi and video conferencing. 

The decision to make the emergency broadband benefit a permanent program broadly comports with recommendations we have made to employ user subsidies (such as connectivity vouchers) to encourage broadband adoption.

The second and arguably more important of the bill’s broadband provisions is its creation of the $42 billion Broadband Equity, Access, and Deployment (BEAD) Program. Under the direction of the NTIA, BEAD will direct grants to state governments to help the states expand access to and use of high-speed broadband.

On the bright side, BEAD does appear to be designed to connect the country’s truly unserved regions—which, as noted above, account for roughly 5% to 8% of U.S. households. The law explicitly requires prioritizing unserved areas before underserved areas. Even where the text references underserved areas as an additional priority, it does so in a way that won’t necessarily distort private investment. The bill also creates preferences for projects in persistent-poverty and high-poverty areas. Thus, the targeted areas are very likely to fall on the “have-not” side of the digital divide.

On its face, the subsidy and grant approach taken in the bill is, all things considered, commendable. As we note in our broadband report, care must be taken to avoid interventions that distort private investment incentives, particularly in a successful industry like broadband. The goal, after all, is more broadband deployment. If policy interventions only replicate private options (usually at higher cost) or, worse, drive private providers from a market, broadband deployment will be slowed or reversed. The approach taken in this bill attempts to line up private incentives with regulatory goals.

As we discuss below, however, the devil is in the details. In particular, BEAD’s structure could theoretically allow enough discretion in execution that a large amount of waste, fraud, and abuse could end up frustrating the program’s goals.

The Bad

While the bill largely keeps the right focus of building out broadband in unserved areas, there are reasons to question some of its preferences and solutions. For instance, the state subgrant process puts for-profit and government-run broadband solutions on an equal footing for purposes of receiving funds, even though the two types of entities exist in very different institutional environments, with very different incentives.

There is also a requirement that projects provide broadband of at least 100/20 Mbps speed, even though the bill defines “unserved” as lacking at least 25/3 Mbps. While this is not terribly objectionable, the preference for 100/20 could have downstream effects on the hardest-to-connect areas. It may only be economically feasible to connect some very remote areas with a 25/3 Mbps connection. Requiring higher speeds in such areas may, despite the best intentions, slow deployment and push providers to prioritize areas that are relatively easier to connect.

For comparison, the FCC’s Connect America Fund and Rural Digital Opportunity Fund programs do give greater weight in their bidding processes to providers that can deploy higher-speed connections. But in areas where a lower speed tier is cost-justified, a provider can still bid and win. This sort of approach would have been preferable in the infrastructure bill.

But the bill’s largest infirmity is not in its terms or aims, but in the potential for mischief in its implementation. In particular, the BEAD grant program lacks the safeguards that have traditionally been applied to this sort of funding at the FCC. 

Typically, an aid program of this sort would be administered by the FCC under rulemaking bound by the Administrative Procedure Act (APA). As cumbersome as that process may sometimes be, APA rulemaking provides a high degree of transparency that results in fairly reliable public accountability. BEAD, by contrast, eschews this process, and instead permits NTIA to work directly with governors and other relevant state officials to dole out the money.  The funds will almost certainly be distributed more quickly, but with significantly less accountability and oversight. 

A large amount of the implementation detail will be driven at the state level. By definition, this will make it more difficult to monitor how well the program’s aims are being met. It also creates a process with far more opportunities for highly interested parties to lobby state officials to direct funding to their individual pet projects. None of this is to say that BEAD funding will necessarily be misdirected, but NTIA will need to be very careful in how it proceeds.

Conclusion: The Opportunity

Although the BIF’s broadband funds are slated to be distributed next year, we may soon be able to see whether there are warning signs that the legitimate goal of broadband deployment is being derailed for political favoritism. BEAD initially grants a flat $100 million to each state; it is only additional monies over that initial amount that need to be sought through the grant program. Thus, it is highly likely that some states will begin to enact legislation and related regulations in the coming year based on that guaranteed money. This early regulatory and legislative activity could provide insight into the pitfalls the full BEAD grantmaking program will face.

The larger point, however, is that the program needs safeguards. Where Congress declined to adopt them, NTIA would do well to implement them. Obviously, this will be something short of full APA rulemaking, but the NTIA will need to make accountability and reliability a top priority to ensure that the digital divide is substantially closed.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2003 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational excessive-pricing case, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses remained actionable, the commission had for some time shown little apparent interest in bringing them. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)
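Broadly speaking, the test applied in cases like TeliaSonera is an “as-efficient competitor” imputation test: could an equally efficient rival, paying the dominant firm’s wholesale price, profitably match its retail price? In rough notation (ours, not the court’s):

```latex
% Let w be the dominant firm's wholesale (upstream) price, p its retail price,
% and c its own downstream (retail-level) cost per unit.
% A margin squeeze is found when the spread fails to cover downstream cost:
p - w < c ,
% i.e., an as-efficient rival that pays w for the input could not profitably
% match the dominant firm's retail price p.
```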

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
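The recoupment logic reduces to a simple present-value condition (notation ours):

```latex
% Let L be the cumulative loss incurred during the predation period, \pi_m the
% per-period monopoly profit available after rivals exit, \delta \in (0,1) the
% discount factor, and T the number of periods before re-entry erodes the monopoly.
% Predation is a rational strategy only if discounted recoupment exceeds the losses:
\sum_{t=1}^{T} \delta^{\,t}\,\pi_m \;>\; L .
% With low entry barriers, supra-competitive prices attract entry quickly
% (T is small and \pi_m is bid away), the inequality fails, and the observed
% low prices are better explained as ordinary competition than as predation.
```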

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

Even if this is not the economically ideal limitation on liability, its impetus—ensuring that liability attaches only in situations where procompetitive explanations for the challenged conduct are unlikely—is entirely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to the commission’s approach and, with little substantive reasoning of its own, fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of a market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty in addition to public and private resources wasted on litigation.

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.
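The cited 98% decline implies a steep compound rate of decrease. A quick back-of-the-envelope calculation (taking the 98% figure and the 20-year window from the text as given) shows that it corresponds to roughly an 18% drop in price-per-Mbps every year:

```python
# Implied compound annual rate of decline in price-per-Mbps,
# assuming a 98% total drop over 20 years (figures from the text).
total_remaining = 1 - 0.98            # 2% of the starting price remains
years = 20
annual_factor = total_remaining ** (1 / years)
annual_decline = 1 - annual_factor
print(f"{annual_decline:.1%} average annual decline")  # ~17.8%
```

In other words, the headline figure is not a one-time adjustment but the cumulative result of sustained year-over-year price declines.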

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they cannot do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein reportedly said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.

Image by Gerd Altmann from Pixabay

AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.

The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded. 

This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.  

The DOJ’s Antitrust Case against the Transaction

The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects. 

This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.

Many commentators, both in the trade press and significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would exercise a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers. 

Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would exercise any credible threat to withhold “must have” content from distributors. A key reason: the lost carriage fees AT&T would incur if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that it would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.
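The district court’s logic can be illustrated with a simple break-even comparison (all figures below are hypothetical and come from no part of the actual record): withholding content pays off only if the margin earned on subscribers who migrate to the integrated distributor exceeds the carriage fees forgone.

```python
# Hypothetical break-even test for a content-foreclosure strategy.
# None of these numbers come from the AT&T/Time Warner litigation.

def foreclosure_profitable(lost_carriage_fees, migrating_subscribers,
                           margin_per_subscriber):
    """Foreclosure pays off only if the gains from subscribers who
    migrate to the integrated distributor exceed the carriage fees
    forgone by withholding the content."""
    gain = migrating_subscribers * margin_per_subscriber
    return gain > lost_carriage_fees

# Forgoing $1 billion in carriage fees to attract 500,000 migrating
# subscribers at $600 of annual margin each yields only $300 million.
print(foreclosure_profitable(1_000_000_000, 500_000, 600))  # False
```

On numbers of this shape, the threat to withhold content is not credible: the certain, immediate loss dwarfs the speculative gain, which is the essence of the district court’s conclusion.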

The Fundamental Flaws in the DOJ’s Antitrust Case

The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize the point that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons. 

False Assumption #1

The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities.  Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.

False Assumption #2

Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or only want dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or rely exclusively on) streaming services.

The Antitrust Case for the Transaction

When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.

Each of these incumbent platforms individually had (and has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, along with a worldwide subscriber base numbering in the hundreds of millions. If that’s not enough, AT&T was not the only entity that observed the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple TV spent approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, spent approximately $3.5 billion.

In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billions of dollars on “content spend” to even stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.

The Enduring Virtues of Antitrust Prudence

AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.  

Among some scholars, regulators, and legislators, it has become increasingly received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding, and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.

In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust law to provide enforcers and courts with greater latitude to block or re-engineer combinations that would not pose sufficiently demonstrated competitive risks under current statutory or case law.

The swift downfall of the AT&T/Time Warner transaction casts great doubt on this critique and accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.

The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.   

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

One of the themes that has run throughout this symposium has been that, throughout his tenure as both a commissioner and as chairman, Ajit Pai has brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.

The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship when he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both charted how he would look at future issues as chairman and would help the public to understand exactly how he would approach new challenges before the FCC (McDowell, Wright).

One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.

Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.

Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that had long been rooted in 20th-century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes: 

Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.

This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media will look much better thanks to Chairman Pai’s vision.

Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher value uses than any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is not free or costless in its maintenance, and Tom Hazlett believes that there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest use:

The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.

And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether those spectrum bands could be opened up for wireless use, to the L-Band Order, in which the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.

The controversial L-Band Order was another example of where Chairman Pai displayed both political acumen as well as an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai was deftly able to shepherd the L-Band Order and guarantee that important spectrum was made available for commercial wireless use.

As a native of Kansas, Chairman Pai ranked rural broadband rollout high among the priorities of his FCC, and his work over the last four years demonstrates this pride of place (Hurwitz, Wright). As Gus Hurwitz notes, “the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity.”

Further, other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G Fund, provides the necessary policy framework with which to extend greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places outside of traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).

But perhaps one of Chairman Pai’s best and (hopefully) most lasting contributions will be de-politicizing the FCC and increasing the transparency with which it operated. In contrast to previous administrations, the Pai FCC had an overwhelmingly bipartisan nature, with many bipartisan votes regularly taken at monthly meetings (Jamison). In important respects, this bipartisan (or nonpartisan) nature was reflected in Chairman Pai’s championing of the Office of Economics and Analytics at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig), the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful not simply to hire a group of economists, but rather to learn from other agencies that have better integrated economics, and to establish a structure that would enable the commission’s economists to materially contribute to better policy.

We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017-2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys. Once conclusions were reached, economics would often be backfilled in to support those conclusions. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source for information and policy analysis before conclusions were reached. As Wright noted, the Federal Trade Commission had adopted this approach. With the FCC moving to do this as well, communications policy in the United States is on much sounder footing thanks to Chairman Pai.

Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.

Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:

The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.

Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff’s editorial privileges over an order’s text and introducing a simple “fact sheet” to explain orders (Lyons).

One of the most interesting insights into the character of Chairman Pai was his willingness to reverse course and take risks to ensure that the FCC promoted innovation instead of obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of the prospects of SpaceX to introduce broadband through its low-Earth-orbit satellite systems, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman’s willingness to change his mind and of his refusal to let policy remain in a comfortable zone that excludes potential innovation.

The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as commitment to careful, economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas W. Hazlett is the H.H. Macaulay Endowed Professor of Economics at Clemson University.]

Disclosure: The one time I met Ajit Pai was when he presented a comment on my book, “The Political Spectrum,” at a Cato Institute forum in 2018. He was gracious, thorough, and complimentary. He said that while he had enjoyed the volume, he hoped not to appear in upcoming editions. I took that to imply that he read the book as harshly critical of the Federal Communications Commission. Well, when merited, I concede. But it left me to wonder if he had followed my story to its end, as I document the success of reforms launched in recent decades and advocate their extension. Inclusion in a future edition might work out well for a chairman’s legacy. Or…

While my comment here focuses on radio-spectrum allocation, there was a notable reform achieved during the Pai FCC that touches on the subject, even if far more general in scope. In January 2018, the commission voted to establish an Office of Economics and Analytics.[1] The organizational change was expeditiously instituted that same year, with the new unit stood up under the leadership of FCC economist Giulia McHenry.[2] I had long proposed an FCC “Office of Economic Analysis” on the grounds that it had a reasonable prospect of improving evidence-based policymaking, allowing cost-benefit calculations to be made in a more professional, independent, and less political context.[3] I welcome this initiative by the Pai FCC and look forward to the empirical test now underway.[4]

Big Picture

Spectrum policy had notable triumphs under Chairman Pai but was—as President Carter dubbed the failed 1980 Iranian hostage-rescue mission—an “incomplete success.” The main cause for celebration was the campaign to push spectrum-access rights into the marketplace. Pai’s public position was straightforward: “Our spectrum strategy calls for making low-band, mid-band, and high-band airwaves available for flexible use,” he wrote in an FCC blog post on June 19, 2018. But the means regulators use to pursue that policy agenda have, historically, proved determinative. The Pai FCC traveled pathways both effective and ineffective, and we should learn from both. The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models. The traditional spectrum-allocation approach is to permit exactly what the FCC finds to be the best use of spectrum, but this assumes knowledge about the value of alternatives that the regulator does not possess. Moreover, it assumes away the costs of regulators imposing their solutions over and above a competitive process that might have less direction but more freedom. In a 2017 notice, the FCC displayed the progress we have made in departing from administrative control, when it sought guidance from private-sector commenters this way:

Are there opportunities to incentivize relocation or repacking of incumbent licensees to make spectrum available for flexible broadband use?

We seek comment on whether auctions … could be used to increase the availability of flexible use spectrum?

By focusing on how rights—not markets—should be structured, the FCC may side-step useless food fights and let social progress flow.[5]

Progress

Substantial spectrum-allocation gains were realized. Indeed, when one looks at the pattern in licensed and unlicensed allocations for “flexible use” under 10 GHz, the recent four-year interval coincides with generous increases, both absolutely and from trend. See Figure 1. These data feature expansions in bandwidth via liberal licenses, including 70 MHz for CBRS (3.5 GHz band), with rights assigned in Auction 105 (2020), and 280 MHz (3.7-3.98 GHz) assigned in Auction 107 (2020-21, soon to conclude). The 70 MHz added via Auction 1002 (600 MHz) in 2017 is accounted for during the previous FCC, but substantial bandwidth was added in the millimeter-wave bands via Auctions 101, 102, and 103 (not shown in Figure 1, which focuses on low- and mid-band rights).[6] Meanwhile, multiple increments of unlicensed spectrum were allocated in 2020: 30 MHz shifted from the Intelligent Transportation Services set-aside (5.9 GHz), 80 MHz in CBRS, and 1,200 MHz (6 GHz) dedicated to Wi-Fi-type services.[7] Substantial millimeter-wave frequency space had previously been set aside for unlicensed operations in 2016.[8]

Source: FCC and author’s calculations.

The figure, however, does not capture the elephant in the room. Auction 107 has assigned licenses allocated 280 MHz of flexible-use mid-band spectrum, producing at least $94 billion in gross bids (of which about $13 billion will be paid to incumbent satellite licensees to reconfigure their operations so as to occupy just 200 MHz, rather than 500 MHz, of the 3.7-4.2 GHz band).[9] This crushes previous FCC sales; indeed, it constitutes about 42% of all auction receipts:

  • FCC auction receipts, 1994-2019: $117 billion[10]
  • FCC auction receipts, 2020 (Auctions 103 and 105): $12.1 billion
  • FCC auction winning bids, 2020 (Auction 107): $94 billion (gross bids, including relocation costs and incentive payments, before Assignment Phase payments)
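The roughly 42% share can be checked with quick arithmetic, a back-of-the-envelope sketch using the approximate totals as listed (gross bids are not strictly comparable to net receipts):

```python
# Rough check of Auction 107's share of cumulative FCC auction proceeds,
# using the approximate figures listed above, in billions of dollars.
receipts_1994_2019 = 117.0   # cumulative receipts through December 2019
receipts_2020_other = 12.1   # Auctions 103 and 105
auction_107_gross = 94.0     # Auction 107 gross bids (clock phase)

total = receipts_1994_2019 + receipts_2020_other + auction_107_gross
share = auction_107_gross / total
print(f"{share:.0%}")  # prints "42%"
```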

The addition of the 280 MHz to existing flexible-use spectrum suitable for mobile (aka Commercial Mobile Radio Services, or CMRS) is the largest increment ever released. It will comprise about one-fourth of the low- and mid-band frequencies available via liberal licenses. This constitutes a huge advance for 5G deployments, but its import goes much further—promoting competition, innovation in apps and devices, and the Internet of Things, and pushing the technological envelope toward 6G and beyond. Notably, the U.S. has uniquely led this foray to a new frontier in spectrum allocation.

The FCC deserves praise for pushing this proceeding to fruition. So, here it is. The C-Band is a very big deal and a major policy success. And more: in Auction 107, the commission very wisely sold overlay rights. It did not wait for administrative procedures to reconfigure wireless use, tightly supervising new “sharing” of the band, but instead (a) accepted the incumbents’ basic strategy for reallocation; (b) sold new prospective rights to high bidders, subject to protection of incumbents; (c) used a fraction of proceeds to fund incumbents cooperating with the reallocation, plussing up payments when deadlines were hit; and (d) implicitly relied on the new licensees to push the relocation process forward.

Challenges

It is interesting that the FCC sort of articulated this useful model, and sort of did not:

For a successful public auction of overlay licenses in the 3.7-3.98 GHz band, bidders need to know before an auction commences when they will get access to that currently occupied spectrum as well as the costs they will incur as a condition of their overlay license. (FCC C-Band Order [Feb. 7, 2020], par. 110)

A germ of truth, but note: Auction 107 also demonstrated just the reverse. Rights were sold prior to clearing the airwaves, and bidders—while liable for “incentive payments”—do not know with certainty when the frequencies will be available for their use. Risk is embedded, as it widely is in financial assets (corporate equity shares are efficiently traded despite wide disagreement on future earnings), and yet markets perform. Indeed, the “certainty” approach touted by the FCC in its language about a “successful public auction” has long deterred efficient reallocations, as the incumbents’ exiting process holds up the arrival of entrants. The central feature of the C-Band reallocation was not to create certainty, but to embed an overlay approach into the process. This draws incumbents and entrants together into positive-sum transactions (mediated by the FCC or party-to-party) where they cooperate to create new productive opportunities, sharing the gains.

The inspiration for the C-Band reallocation of satellite spectrum was bottom-up. As with so much of the radio spectrum, the band devoted to satellite distribution of video (relays to and from an array of broadcast and cable TV systems and networks) was old and tired. For decades, applications and systems were locked in by law. They consumed lots of bandwidth while ignoring the emergence of newer technologies like fiber optics (emphasis to underscore that products launched in the 1980s still pose cutting-edge challenges for 2021 spectrum policy). Spying this mismatch, and seeking gains from trade, creative risk-takers petitioned the FCC.

In a mid-2017 request, computer chipmaker Intel and C-Band satellite carrier Intelsat (no corporate relationship) joined forces to ask for permission to expand the scope of satellite licenses. The proffered plan was for license holders to invest in spectrum economies by upgrading satellites and earth stations—magically creating new, unoccupied channels in prime mid-band frequencies perfect for highly valuable 5G services. All existing video transport services would continue, while society would enjoy way more advanced wireless broadband. All regulators had to do was allow “change of use” in existing licenses. Markets would do the rest: satellite operators would make efficient multi-billion-dollar investments, coordinating with each other and their customers, and then take bids from new users itching to access the prime 4 GHz spectrum. The transition to bold, new, more valuable applications would compensate legacy customers and service providers.

This “spectrum sharing” can spin gold, seizing on capitalist discovery and demand revelation in market bargains. Voila: the 21st century, delivered.

Well, yes and no. At first, the FCC filing was a yawner, drawing the standard bureaucratic response. But this one took off when Chairman Pai—alertly, and in the public interest—embraced the proposal, putting it on the July 12, 2018 FCC meeting agenda. Intelsat’s market cap jumped from about $500 million to over $4.5 billion—visible evidence that the spectrum it was using was worth far more than the service it was providing, and that it might realize some substantial fraction of the resource revaluation.[11] 

While the Pai FCC leaned in the proper policy direction, politics soon blew the process down. Congress denounced the “private auction” as a “windfall,” bellowing against the unfairness of allowing corporations (some foreign-owned!) to cash out. The populist message was upside-down. The social damage created by mismanagement of spectrum—millions of Americans paying more and getting less from wireless than otherwise, robbing ordinary citizens of vast consumer surplus—was being fixed by entrepreneurial initiative. Moreover, the public gains (lower prices plus innovation externalities spun off from liberated bandwidth) were undoubtedly far greater than any rents captured by the incumbent licensees. And a great bonus to spur future progress: rewards for parties that initiate and secure efficiency-enhancing rights will unleash vastly more productive activity.

But the populist winds—gale force and bipartisan—spun the FCC.

It was legally correct that Intelsat and its rival satellite carriers did not own the spectrum allocated to the C-Band. Indeed, that was the root of the problem. And here’s the fatal catch: in applying for broader spectrum property rights, they revealed a valuable discovery. The FCC, posing as referee, turned competitor and appropriated the proffered business plan on behalf of its client (the U.S. government), then auctioned it to bidders. Regulators did tip the incumbents, whose help was still needed in reorganizing the C-Band, setting $3.3 billion as a fair price for “moving costs” (changing out technology to reduce their transmission footprints) and dangling another $9.7 billion in “incentive payments” not to dilly-dally. In total, carriers have bid some $93.9 billion, or $1.02 per MHz-Pop.[12] This is 4.7 times the price paid for the Priority Access Licenses (PALs) allocated 70 MHz in Auction 105 earlier in 2020.
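The $1.02-per-MHz-Pop figure can be reproduced with simple arithmetic, assuming a U.S. population of roughly 328 million (an assumption here; the commission’s MHz-Pop denominator may differ slightly):

```python
# Back-of-the-envelope price per MHz-Pop for Auction 107's gross bids.
gross_bids = 93.9e9        # dollars, clock-phase gross bids
bandwidth_mhz = 280        # flexible-use bandwidth sold
population = 328.2e6       # assumed U.S. population (2019 estimate)

price_per_mhz_pop = gross_bids / (bandwidth_mhz * population)
print(f"${price_per_mhz_pop:.2f} per MHz-Pop")  # prints "$1.02 per MHz-Pop"
```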

The TOTM assignment was not to evaluate Ajit Pai but to evaluate the Pai FCC and its spectrum policies. On that scale, great value was delivered by the Intel-Intelsat proposal, and the FCC’s alert endorsement, offset in some measure by the long-term losses that will likely flow from the dirigiste retreat to fossilized spectrum rights controlled by diktat.

Sharing Nicely

And that takes us to 2020’s Auction 105 (Citizens Broadband Radio Service, or CBRS). The U.S. has lagged much of the world in allocating flexible-use spectrum rights in the 3.5 GHz band. Ireland auctioned rights to use 350 MHz in May 2017, and many countries did likewise between then and 2020, distributing far more than the 70 MHz allocated to the Priority Access Licenses (PALs); allocations ranged from 150 MHz to 390 MHz. The Pai FCC can plausibly attribute the lag to “preexisting conditions.” Here, however, I will stress that the Pai FCC did not substantially further our understanding of the costs of “spectrum sharing” under coordinating devices imposed by the FCC.

All commercially valuable spectrum bands are shared. The most intensely shared, in the relevant economic sense, are those bands curated by mobile carriers. These frequencies are complemented by extensive network capital supplied by investors, and permit millions of users—including international roamers—to gain seamless connectivity. Unlicensed bands, alternatively, tend to separate users spatially, powering down devices to localize footprints. These limits work better in situations where users desire short transmissions, like a Bluetooth link from iPhone to headphone or when bits can be handed off to a wide area network by hopping 60 feet to a local “hot spot.” The application of “spectrum sharing” to imply a non-exclusive (or unlicensed) rights regime is, at best, highly misleading. Whenever conditions of scarcity exist, meaning that not all uses can be accommodated without conflict, some rationing follows. It is commonly done by price, behavioral restriction, or both.

In CBRS, the FCC has imposed three layers of “priority” access across the 3550-3700 MHz band. Certain government radars are assumed to be fixed and must be protected. When in use, these systems demand that other wireless services stay silent on particular channels. Next in line are PAL owners, parties which have paid for exclusivity but which are not guaranteed access to a given channel. These rights, which sold for about $4.5 billion, are allocated dynamically by a controller (a Spectrum Access System, or SAS). The radios and networks used automatically and continuously check in to obtain spectrum-space permissions. Seven PALs, allocated 10 MHz each, have been assigned, 70 MHz in total. Finally, General Authorized Access (GAA) is given without limit or exclusivity to radio devices across the 80 MHz remaining in the band, plus any PALs not in use. Some 5G phones are already equipped to use such bands on an unlicensed basis.
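The three-tier rationing just described can be sketched as a toy priority rule. This is a simplification for illustration only, not the actual SAS protocol, which assigns channels dynamically by geography, power level, and time:

```python
# Toy model of CBRS tiered access: an active incumbent radar silences a
# channel; PAL holders get paid-for priority next; GAA devices take what
# remains, with no exclusivity.
def assign_channel(incumbent_active, pal_requests, gaa_requests):
    """Return (tier, user) granted a channel under three-tier priority."""
    if incumbent_active:
        return ("incumbent", "federal radar")  # all others must stay silent
    if pal_requests:
        return ("PAL", pal_requests[0])        # exclusive when present
    if gaa_requests:
        return ("GAA", gaa_requests[0])        # opportunistic access
    return ("idle", None)

print(assign_channel(False, ["carrier A"], ["handset B"]))  # PAL wins
```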

We shall see how the U.S. system works in comparison to alternatives. What is important to note is that the particular form of “spectrum sharing” is neither necessary nor free. As is standard outside the U.S., exclusive rights analogous to CMRS licenses could have been auctioned here, with U.S. government radars given vested rights.

One point that is routinely missed is that the decision to have the U.S. government partition the rights into three layers immediately conceded that U.S. government priority applications (for radar) would never shift. That is asserted as though it were a proposition needing no justification, but it is precisely the sort of impediment to efficiency that has plagued spectrum reallocations for decades. It was, for instance, the 2002 assumption behind TV “white spaces”—that 402 MHz of TV Band frequencies was fixed in place, that the unused channels could never be repackaged, sold as exclusive rights, and diverted to higher-valued uses. That unexamined assertion has since proven wrong, as seen in the reduction of the band from 402 MHz to 235 MHz following Auctions 73 (2008) and 1001/1002 (2016-17), as well as in the clear possibility that remaining TV broadcasts could today be entirely transferred to cable, satellite, and OTT broadband (as they have already, effectively, been). The problem in CBRS is that the rights now distributed for the 80 MHz of unlicensed, with its protections of certain priority services, do not sprinkle the proper rights into the market such that positive-sum transitions can be negotiated. We’re stuck with whatever inefficiencies this “preexisting condition” of the 3.5 GHz band might endow, unless another decade-long FCC spectrum allocation can move things forward.[13]

Already visible is that the rights sold as PALs in CBRS fetched only about 20% of the value of rights sold in the C-Band. This differential reflects the power restrictions and overhead costs embedded in the FCC’s sharing rules for CBRS (involving dynamic allocation of the exclusive access rights conveyed in PALs) but avoided in the C-Band. In the latter, the sharing arrangements are delegated to the licensees, whose bids reveal that they see these rights as more productive, with opportunities to host more services.
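The “about 20%” comparison is a simple per-MHz ratio of the two auctions’ proceeds (a sketch using the approximate totals cited in this post):

```python
# Per-MHz value of CBRS PALs relative to C-Band flexible-use licenses.
pal_per_mhz = 4.5 / 70       # Auction 105: ~$4.5B for 70 MHz ($B/MHz)
cband_per_mhz = 93.9 / 280   # Auction 107: ~$93.9B for 280 MHz ($B/MHz)

ratio = pal_per_mhz / cband_per_mhz
print(f"{ratio:.0%}")  # prints "19%", i.e., roughly one-fifth
```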

There should be greater recognition of the relevant trade-offs in imposing coexistence rules. Yet, the Pai FCC succumbed in 5.9 GHz and in the 6 GHz bands to the tried-and-true options of Regulation Past. This was hugely ironic in the former, where the FCC had in 1999 imposed unlicensed access under rules that favored specific automotive informatics—Dedicated Short-Range Communications (DSRC)—that proved a 20-year bust. In diagnosing this policy blunder, the FCC then repeated it, splitting off a 45 MHz band with Wi-Fi-friendly unlicensed rules, and leaving 30 MHz to continue as the 1999 set-aside for DSRC. A liberalization of rights that would have allowed for a “private auction” to change the use of the band would have been the preferred approach. Instead, we are left with a partition of the band into rival rule regimes again established by administrative fiat.

This approach was then imposed again in the large 1.2 GHz unlicensed allocation surrounding 6 GHz, making a big 2020 splash. The FCC here assumed, categorically, that unlicensed rules are the best way to sponsor spectrum coordination, ignoring the costs of that coordination. The commission also appears to forget the progress it has made with innovative policy solutions that pull market forces in through “overlay” licenses. These useful devices were used, in one form or another, to reallocate spectrum for 2G in Auction 4, for AWS in Auction 66, for millimeter bands in Auctions 102 and 103, in the “TV Incentive Auction,” and for the satellite C-Band in Auction 107, and they have recently appeared as star players in the January 2021 FCC plan to rationalize the complex mix of rights scattered around the 2.5 GHz band.[14] That band is too complicated for administrators to figure out; it could be transactionally more efficient to let market competitors do so.

The Future

The re-allocations in the 5.9 GHz and 6 GHz bands may yet host productive services. One can hope. But how will regulators know that the options allowed, and taken, are superior to the alternatives—suppressed by law for the next five, 10, or 20 years—that might have emerged had competitors had the right to test business models or technologies disfavored by regulators’ best-laid plans? That is the thinking that locked in the TV band, the satellite C-Band, and the ITS band. It’s what we have learned to be problematic throughout the political radio spectrum. We shall see, as Chairman Pai speculated, what future chapters these decisions leave for future editions.


[1]   https://www.fcc.gov/document/fcc-votes-establish-office-economics-analytics-0

[2]   https://www.fcc.gov/document/fcc-opens-office-economics-and-analytics

[3]   Thomas Hazlett, Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins, Resources for the Future Discussion Paper 11-23 (May 2011); David Honig, FCC Reorganization: How Replacing Silos with Functional Organization Would Advance Civil Rights, 3 University of Pennsylvania Journal of Law and Public Affairs 18 (Aug. 2018). 

[4] It is a great sadness that Jerry Ellig, the 2017-18 FCC chief economist who might well have offered the most careful analysis of such a structural reform, will not be available for the task – one which he had already begun, writing this recent essay with two other FCC chief economists: Babette Boliek, Jerry Ellig and Jeff Prince, Improved economic analysis should be lasting part of Pai’s FCC legacy, The Hill (Dec. 29, 2020). Jerry’s sudden passing, on January 21, 2021, is a deep tragedy. Our family weeps for his wonderful wife, Sandy, and his precious daughter, Kat. 

[5]  As argued in: Thomas Hazlett, “The best way for the FCC to enable a 5G future,” Reuters (Jan. 17, 2018).

[6]  In 2018-19, FCC Auctions 101 and 102 offered licenses allocated 1,550 MHz of bandwidth in the 24 GHz and 28 GHz bands, although some of the bandwidth had previously been assigned and post-auction confusion over interference with adjacent frequency uses (in 24 GHz) has impeded some deployments.  In 2020, Auction 103 allowed competitive bidding for licenses to use 37, 39, and 47 GHz frequencies, 3400 MHz in aggregate.  Net proceeds to the FCC in 101, 102 and 103 were:  $700.3 million, $2.02 billion, and $7.56 billion, respectively.

[7]   I estimate that some 70 MHz of unlicensed bandwidth allocated for television white-space devices was eliminated pursuant to the Incentive Auction in 2017. This, however, was baked into spectrum policy prior to the Pai FCC.

[8]   Notably, 64-71 GHz was allocated for unlicensed radio operations in the Spectrum Frontiers proceeding, adjacent to the 57-64 GHz unlicensed bands.  See Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, et al., Report and Order and Further Notice of Proposed Rulemaking, 31 FCC Rcd 8014 (2016), 8064-65, para. 130.

[9]   The revenues reflect bids made in the Clock phase of Auction 107.  An Assignment Phase has yet to occur as of this writing.

[10]  The 2021 FCC Budget request, p. 34: “As of December 2019, the total amount collected for broader government use and deficit reduction since 1994 exceeds $117 billion.” 

[11]   Kerrisdale Management issued a June 2018 report that tied the proceeding to a dubious source: “to the market-oriented perspective on spectrum regulation – as articulated, for instance, by the recently published book The Political Spectrum by former FCC chief economist Thomas Winslow Hazlett – [that] the original sin of the FCC was attempting to dictate from on high what licensees should or shouldn’t do with their spectrum. By locking certain bands into certain uses, with no simple mechanism for change or renegotiation, the agency guaranteed that, as soon as technological and commercial realities shifted – as they do constantly – spectrum use would become inefficient.” 

[12]   Net proceeds will be reduced to reflect bidding credits extended to small businesses, but additional bids will be received in the Assignment Phase of Auction 107, still to be held. Likely totals will remain somewhere around current levels.

[13]  The CBRS band is composed of frequencies at 3550-3700 MHz.  The top 50 MHz of that band was officially allocated in 2005 in a proceeding that started years earlier.  It was then curious that the adjacent 100 MHz was not included. 

[14] FCC Seeks Comment on Procedures for 2.5 GHz Reallocation (Jan. 13, 2021).

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]

It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug alone has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.

I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.

Thus, while I don’t think it’s going to be the most notable policy he’s participated in at the commission, I would like to look at Ajit’s tenure through the lens of one small part of one fairly specific proceeding: the commission’s decision to include SpaceX as a low-latency provider in the Rural Digital Opportunity Fund (RDOF) Auction.

The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the RDOF, which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of under-served areas should receive more than $16 billion in federal funding should spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.

But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.

That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology and previous satellite offerings had been high-latency, which is good for some uses but not others.

But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.

I think we are unlikely to see such regulatory risk-taking, both technical and political, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views about topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.