Archives For Federal Communications Commission

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

The primary aim of the bill’s broadband provisions is to connect the truly unconnected—what the bill refers to as the “unserved” (those lacking a connection of at least 25/3 Mbps) and the “underserved” (those lacking a connection of at least 100/20 Mbps). In seeking to realize this goal, it’s important to bear in mind that dynamic analysis demonstrates that the broadband market is overwhelmingly healthy, even in locales with relatively few market participants. According to the Federal Communications Commission’s (FCC) latest Broadband Progress Report, approximately 5% of U.S. consumers have no options for at least 25/3 Mbps broadband, and slightly more than 8% have no options for at least 100/10 Mbps.
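For concreteness, here is a minimal sketch of how those statutory speed tiers sort a given location. The function is our illustration, not language from the bill:

```python
# Illustrative only: classify a location's best available connection
# under the BIF's speed thresholds (download/upload, in Mbps).

def classify(download: float, upload: float) -> str:
    if download < 25 or upload < 3:
        return "unserved"      # lacks even 25/3 Mbps service
    if download < 100 or upload < 20:
        return "underserved"   # has 25/3, but lacks 100/20 Mbps
    return "served"

assert classify(10, 1) == "unserved"
assert classify(50, 10) == "underserved"
assert classify(200, 35) == "served"
```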

Reaching the truly unserved portions of the country will require targeting subsidies toward areas that are currently uneconomic to reach. Without properly targeted subsidies, there is a risk of dampening incentives for private investment and slowing broadband buildout. These tradeoffs must be considered. As we wrote previously in our Broadband Principles issue brief:

  • To move forward successfully on broadband infrastructure spending, Congress must take seriously the roles of both the government and the private sector in reaching the unserved.
  • Current U.S. broadband infrastructure is robust, as demonstrated by the way it met the unprecedented surge in demand for bandwidth during the recent COVID-19 pandemic.
  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing Internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations. Competitive markets should be subject to lighter-touch obligations.

The Good

The BIF’s broadband provisions ended up in a largely positive place, at least as written. There are two primary ways it seeks to achieve its goals of promoting adoption and deploying broadband to unserved/underserved areas. First, it makes permanent the Emergency Broadband Benefit program that had been created to provide temporary aid to households who struggled to afford Internet service during the COVID-19 pandemic, though it does lower the monthly user subsidy from $50 to $30. The renamed Affordable Connectivity Program can be used to pay for broadband on its own, or as part of a bundle of other services (e.g., a package that includes telephone, texting, and the rental fee on equipment).

Relatedly, the bill also subsidizes the cost of equipment by extending a one-time reimbursement of up to $100 to broadband providers when a consumer takes advantage of the provider’s discounted sale of connected devices, such as laptops, desktops, or tablet computers capable of Wi-Fi and video conferencing. 

The decision to make the emergency broadband benefit a permanent program broadly comports with recommendations we have made to employ user subsidies (such as connectivity vouchers) to encourage broadband adoption.

The second and arguably more important of the bill’s broadband provisions is its creation of the $42 billion Broadband Equity, Access, and Deployment (BEAD) Program. Administered by the NTIA, BEAD will direct grants to state governments to help states expand access to, and use of, high-speed broadband.

On the bright side, BEAD does appear to be designed to connect the country’s truly unserved regions—which, as noted above, account for roughly 5% of the nation’s households (or about 8%, if the threshold is 100/10 Mbps). The law explicitly requires prioritizing unserved areas before underserved areas. Even where the text references underserved areas as an additional priority, it does so in a way that won’t necessarily distort private investment. The bill also creates preferences for projects in persistent-poverty and high-poverty areas. Thus, the targeted areas are very likely to fall on the “have-not” side of the digital divide.

On its face, the subsidy-and-grant approach taken in the bill is commendable. As we note in our broadband report, care must be taken to avoid interventions that distort private investment incentives, particularly in a successful industry like broadband. The goal, after all, is more broadband deployment. If policy interventions merely replicate private options (usually at higher cost) or, worse, drive private providers from a market, broadband deployment will be slowed or reversed. The approach taken in this bill attempts to align private incentives with regulatory goals.

As we discuss below, however, the devil is in the details. In particular, BEAD’s structure could theoretically allow enough discretion in execution that a large amount of waste, fraud, and abuse could end up frustrating the program’s goals.

The Bad

While the bill largely keeps the right focus on building out broadband in unserved areas, there are reasons to question some of its preferences and solutions. For instance, the state subgrant process puts for-profit and government-run broadband solutions on a level playing field for the purposes of receiving funds, even though the two types of entities exist in very different institutional environments with very different incentives.

There is also a requirement that projects provide broadband of at least 100/20 Mbps speed, even though the bill defines “unserved” as lacking at least 25/3 Mbps. While this is not terribly objectionable, the preference for 100/20 could have downstream effects on the hardest-to-connect areas. It may only be economically feasible to connect some very remote areas with a 25/3 Mbps connection. Requiring higher speeds in such areas may, despite the best intentions, slow deployment and push providers to prioritize areas that are relatively easier to connect.

For comparison, the FCC’s Connect America Fund and Rural Digital Opportunity Fund programs do give greater weight in the bidding process to providers that can deploy higher-speed connections. But in areas where a lower speed tier is cost-justified, a provider can still bid and win. This sort of approach would have been preferable in the infrastructure bill.

But the bill’s largest infirmity is not in its terms or aims, but in the potential for mischief in its implementation. In particular, the BEAD grant program lacks the safeguards that have traditionally been applied to this sort of funding at the FCC. 

Typically, an aid program of this sort would be administered by the FCC under rulemaking bound by the Administrative Procedure Act (APA). As cumbersome as that process may sometimes be, APA rulemaking provides a high degree of transparency that results in fairly reliable public accountability. BEAD, by contrast, eschews this process, and instead permits NTIA to work directly with governors and other relevant state officials to dole out the money. The funds will almost certainly be distributed more quickly, but with significantly less accountability and oversight.

A large amount of the implementation detail will be driven at the state level. By definition, this will make it more difficult to monitor how well the program’s aims are being met. It also creates a process with far more opportunities for highly interested parties to lobby state officials to direct funding to their individual pet projects. None of this is to say that BEAD funding will necessarily be misdirected, but NTIA will need to be very careful in how it proceeds.

Conclusion: The Opportunity

Although the BIF’s broadband funds are slated to be distributed next year, we may soon be able to see whether there are warning signs that the legitimate goal of broadband deployment is being derailed for political favoritism. BEAD initially grants a flat $100 million to each state; it is only additional monies over that initial amount that need to be sought through the grant program. Thus, it is highly likely that some states will begin to enact legislation and related regulations in the coming year based on that guaranteed money. This early regulatory and legislative activity could provide insight into the pitfalls the full BEAD grantmaking program will face.
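Some rough arithmetic (ours; it deliberately ignores territories and statutory set-asides) shows how those guaranteed allocations compare with the program as a whole:

```python
# Back-of-the-envelope sketch of BEAD's guaranteed state allocations.
# Our illustration; it ignores territories and statutory set-asides.

states = 50
floor_per_state = 100e6    # flat $100 million initial grant per state
floor_total = states * floor_per_state
bead_total = 42e9          # total BEAD funding

print(f"guaranteed up front: ${floor_total / 1e9:.0f} billion")
print(f"left for the grant process: ${(bead_total - floor_total) / 1e9:.0f} billion")
```

On those assumptions, roughly $5 billion is guaranteed and some $37 billion remains subject to NTIA’s grantmaking discretion, which is why the safeguards discussed here matter.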

The larger point, however, is that the program needs safeguards. Where Congress declined to adopt them, NTIA would do well to implement them. Obviously, this will be something short of full APA rulemaking, but the NTIA will need to make accountability and reliability a top priority to ensure that the digital divide is substantially closed.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as net neutrality requirements that may reduce investment in broadband by internet service providers—and new regulatory requirements for airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will create unnecessary business uncertainty, in addition to wasting public and private resources on litigation.

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.
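To make the price-per-Mbps metric concrete, consider a small worked example. The plan prices and speeds below are invented for illustration; they are not FCC figures:

```python
# Price-per-Mbps is simply the monthly price divided by the advertised
# download speed. The two plans below are hypothetical.

def price_per_mbps(monthly_price: float, download_mbps: float) -> float:
    return monthly_price / download_mbps

then = price_per_mbps(40, 1)    # e.g., a $40/month, 1 Mbps DSL line
now = price_per_mbps(60, 75)    # e.g., a $60/month, 75 Mbps cable plan

print(f"then: ${then:.2f}/Mbps, now: ${now:.2f}/Mbps")
print(f"decline: {1 - now / then:.0%}")  # 98% on these hypothetical numbers
```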

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein is said to have remarked: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.


Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints. 

Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers weighing the bills have not yet anticipated.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since those integrations could be challenged as “discrimination” whenever they come at the cost of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.

In fact, the bill’s scope is so broad that some have argued that the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that, “Due to limited resources, not all platform integration will be challenged.” 

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and a power it may yet have restored by other means—there are two scenarios through which it could end up with extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer describes above as a positive—the fact that enforcers would simply let “benign” violations of the law slide—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. Anti-corruption laws are frequently used to punish opponents of the regime in China, who probably are also corrupt, but are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security law has also been used to target peaceful protestors and critical media thanks to its vague and overly broad drafting. 

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws applied at the enforcer’s discretion give broad powers to the enforcer to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her, may wish to use this discretionary power to prosecute firms that she feels are hurting society for unrelated reasons, such as because of political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but their application would require contentious value judgements that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem,” Khan approvingly highlights former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.

It’s a telecom tale as old as time: industry gets a prime slice of radio spectrum and falls in love with it, only to take it for granted. Then, faced with the reapportionment of that spectrum, it proceeds to fight tooth and nail (and law firm) to maintain the status quo. 

In that way, the decision by the Intelligent Transportation Society of America (ITSA) and the American Association of State Highway and Transportation Officials (AASHTO) to seek judicial review of the Federal Communications Commission’s (FCC) order reassigning the 5.9 GHz band was right out of central casting. But rather than simply asserting that the FCC’s order was arbitrary, ITSA foreshadowed many of the arguments that it intends to make against the order.

There are three arguments of note, and should ITSA win on the merits of any of those arguments, it would mark a significant departure from the way spectrum is managed in the United States.

First, ITSA asserts that it is the U.S. Department of Transportation (DOT), by virtue of its role as the nation’s transportation regulator, and not the FCC, that retains authority to regulate radio spectrum as it pertains to DOT programs. Of course, this notion is absurd on its face. Congress mandated that the FCC act as the exclusive regulator of non-federal uses of wireless spectrum. This leaves the FCC free to—in the words of the Communications Act—“encourage the provision of new technologies and services to the public” and to “provide to all Americans” the best communications networks possible.

In contrast, other federal agencies with some amount of allocated spectrum each focus exclusively on a particular mission, without regard to the broader concerns of the country (including uses by sister agencies or the states). That’s why, rather than allocate the spectrum directly to DOT, the statute directs the FCC to consider allocating spectrum for Intelligent Transportation Systems and to establish the rules for their spectrum use. The statute directs the FCC to consult with the DOT, but leaves final decisions to the FCC.

Today’s crowded airwaves make it impossible to allocate spectrum for 5G, Wi-Fi 6, and other innovative uses without somehow impacting spectrum used by a federal agency. Accepting the ITSA position would fundamentally alter the FCC’s role relative to other agencies with an interest in the disposition of spectrum, rendering the FCC a vestigial regulatory backwater subject to non-expert veto. As a matter of policy, this would effectively prevent the United States from meeting the growing challenges of our exponentially increasing demand for wireless access. 

It would also put us at a tremendous disadvantage relative to other countries. International coordination of wireless policy has become critical in the global economy, with our global supply chains and wireless equipment manufacturers dependent on global standards to drive economies of scale and interoperability around the globe. At the most recent World Radiocommunication Conference, in 2019, interagency spectrum squabbling significantly undermined U.S. negotiation efforts. If agencies actually had veto power over the FCC’s spectrum decisions, the United States would have no way to create a coherent negotiating position, let alone to advocate effectively for our national interests.

Second, though relatedly, ITSA asserts that the FCC’s engineers failed to appropriately evaluate safety impacts and interference concerns. It’s hard to see how this could be the case, given both the massive engineering record and the FCC’s globally recognized expertise in spectrum. As a general rule, the FCC leads the world in spectrum engineering (there is a reason things like mobile service and Wi-Fi started in the United States). No other federal agency (including DOT) has such extensive, varied, and lengthy experience with interference analysis. This allows the FCC to develop broadly applicable standards to protect all emergency communications. Every emergency first responder relies on this expertise every day that they use wireless communications to save lives. Here again, we see the wisdom in Congress delegating to a single expert agency the task of finding the right balance to meet all our wireless public-safety needs.

Third, the petition ambitiously asks the court to set aside all parts of the order, with the exception of the one portion that ITSA likes: freeing the top 30 MHz of the band for use by C-V2X on a permanent basis. Given their other arguments, this assertion strains credulity. Either the FCC makes the decisions, or the DOT does. Giving federal agencies veto power over FCC decisions would be bad enough. Allowing litigants to play federal agencies against each other so they can mix and match results would produce chaos and/or paralysis in spectrum policy.

In short, ITSA is asking the court to fundamentally redefine the scope of FCC authority to administer spectrum when other federal agencies are involved; to undermine deference owed to FCC experts; and to do all of this while also holding that the FCC was correct on the one part of the order with which the complainants agree. This would make future progress in wireless technology effectively impossible.

We don’t let individual states decide which side of the road to drive on, or whether red or some other color traffic light means stop, because traffic rules only work when everybody follows the same rules. Wireless policy can only work if one agency makes the rules. Congress says that agency is the FCC. The courts (and other agencies) need to remember that.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appears to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

Notably, the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time, in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
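A quick back-of-the-envelope adjustment shows how much that omission matters. It assumes, purely for illustration, that the 92 percent figure applies only to Yelp’s browser traffic and that app searches involved no Google referral at all:

```python
# Adjusting the lawyers' 92% Google-referral figure for the 40% of
# Yelp searches that came through its own mobile apps.
# Assumption (ours, for illustration): app searches involve no Google referral.

app_share = 0.40               # searches from Yelp's mobile apps
web_share = 1 - app_share      # searches arriving via browsers
google_share_of_web = 0.92     # figure cited in the lawyers' memo

overall = web_share * google_share_of_web
print(f"Google referrals as a share of all Yelp searches: {overall:.0%}")  # ~55%
```

On those assumptions, Google accounted for roughly 55 percent of Yelp’s search traffic, not 92 percent; the true figure may differ, but the gap illustrates why the denominator matters.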

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge search algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple, as well as wireless carriers and device manufacturers, to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

One of the themes that has run throughout this symposium has been that, throughout his tenure as both a commissioner and as chairman, Ajit Pai has brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.

The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship when he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both charted how he would look at future issues as chairman and would help the public to understand exactly how he would approach new challenges before the FCC (McDowell, Wright).

One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.

Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.

Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that long had been rooted in 20th century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes: 

Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.

This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case now pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media will look much better, thanks to Chairman Pai's vision.

Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher-value uses than those of any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is neither free nor costless to maintain, and Tom Hazlett believes there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest use:

The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.

And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether that band could be opened up for wireless use, to the L-Band Order, in which the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.

The controversial L-Band Order was another example of Chairman Pai displaying both political acumen and an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai deftly shepherded the L-Band Order through and ensured that important spectrum was made available for commercial wireless use.

As a native of Kansas, Chairman Pai gave rural broadband rollout pride of place among the priorities of his FCC, and his work over the last four years demonstrates it (Hurwitz, Wright). As Gus Hurwitz notes, “the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity.”

Further, other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G Fund, provides the necessary policy framework for extending greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places outside of traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).

But perhaps one of Chairman Pai’s best and (hopefully) most lasting contributions will be de-politicizing the FCC and increasing the transparency with which it operated. In contrast to previous administrations, the Pai FCC had an overwhelmingly bipartisan nature, with many bipartisan votes regularly taken at monthly meetings (Jamison). In important respects, this bipartisan (or nonpartisan) nature was reinforced by Chairman Pai’s championing of the Office of Economics and Analytics (OEA) at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig), the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful not just to hire a bunch of economists, but to learn from other agencies that have better integrated economics and to establish a structure that would enable the commission’s economists to materially contribute to better policy.

We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017 to 2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys: once conclusions were reached, economic analysis would often be backfilled to support them. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source of information and policy analysis before conclusions were reached. As Wright notes, the Federal Trade Commission had already adopted this approach. With the FCC now doing the same, communications policy in the United States is on much sounder footing thanks to Chairman Pai.

Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.

Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:

The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.

Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff editorial privileges over an order’s text and introducing a simple “fact sheet” to explain orders (Lyons).

One of the most interesting insights into the character of Chairman Pai was his willingness to reverse course and take risks to ensure that the FCC promoted innovation, rather than obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of SpaceX’s prospects for delivering broadband through its low-Earth-orbit satellite system, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman’s willingness to change his mind and of his refusal to let policy remain in a comfortable zone that excludes potential innovation.

The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as commitment to careful, economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas W. Hazlett is the H.H. Macaulay Endowed Professor of Economics at Clemson University.]

Disclosure: The one time I met Ajit Pai was when he presented a comment on my book, “The Political Spectrum,” at a Cato Institute forum in 2018. He was gracious, thorough, and complimentary. He said that while he had enjoyed the volume, he hoped not to appear in upcoming editions. I took that to imply that he read the book as harshly critical of the Federal Communications Commission. Well, when merited, I concede. But it left me to wonder if he had followed my story to its end, as I document the success of reforms launched in recent decades and advocate their extension. Inclusion in a future edition might work out well for a chairman’s legacy. Or…

While my comment here focuses on radio-spectrum allocation, there was a notable reform achieved during the Pai FCC that touches on the subject, even if far more general in scope. In January 2018, the commission voted to initiate an Office of Economics and Analytics.[1] The organizational change was expeditiously instituted that same year, with the new unit stood up under the leadership of FCC economist Giulia McHenry.[2]  I long proposed an FCC “Office of Economic Analysis” on the grounds that it had a reasonable prospect of improving evidence-based policymaking, allowing cost-benefit calculations to be made in a more professional, independent, and less political context.[3]  I welcome this initiative by the Pai FCC and look forward to the empirical test now underway.[4] 

Big Picture

Spectrum policy had notable triumphs under Chairman Pai but was—as President Carter dubbed the failed 1980 Iranian hostage-rescue mission—an “incomplete success.” The main cause for celebration was the campaign to push spectrum-access rights into the marketplace. Pai’s public position was straightforward: “Our spectrum strategy calls for making low-band, mid-band, and high-band airwaves available for flexible use,” he wrote in an FCC blog post on June 19, 2018. But the means regulators use to pursue that policy agenda have, historically, proven determinative. The Pai FCC traveled pathways both effective and ineffective, and we should learn from both. The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models. The traditional spectrum-allocation approach is to permit exactly what the FCC finds to be the best use of spectrum, but this assumes knowledge about the value of alternatives that the regulator does not possess. Moreover, it assumes away the costs of regulators imposing their solutions over and above a competitive process that might have less direction but more freedom. In a 2017 notice, the FCC displayed the progress we have made in departing from administrative control when it sought guidance from private-sector commenters this way:

“Are there opportunities to incentivize relocation or repacking of incumbent licensees to make spectrum available for flexible broadband use?

We seek comment on whether auctions … could be used to increase the availability of flexible use spectrum?”

By focusing on how rights—not markets—should be structured, the FCC may side-step useless food fights and let social progress flow.[5]

Progress

Spectrum-allocation results were realized. Indeed, when one looks at the pattern in licensed and unlicensed allocations for “flexible use” under 10 GHz, the recent four-year interval coincides with generous increases, both absolutely and relative to trend. See Figure 1. These data feature expansions in bandwidth via liberal licenses that include 70 MHz for CBRS (3.5 GHz band), with rights assigned in Auction 105 (2020), and 280 MHz (3.7-3.98 GHz) assigned in Auction 107 (2020-21, soon to conclude). The 70 MHz added via Auction 1002 (600 MHz) in 2017 was accounted for during the previous FCC, but substantial bandwidth was added in the millimeter-wave bands via Auctions 101, 102, and 103 (not shown in Figure 1, which focuses on low- and mid-band rights).[6] Meanwhile, multiple increments of unlicensed spectrum were allocated in 2020: 45 MHz shifted from the Intelligent Transportation Services set-aside (5.9 GHz), 80 MHz of CBRS, and 1,200 MHz (6 GHz) dedicated to Wi-Fi-type services.[7] Substantial millimeter-wave frequency space had previously been set aside for unlicensed operations in 2016.[8]

[Figure 1: licensed and unlicensed flexible-use spectrum allocations below 10 GHz (image not reproduced). Source: FCC and author’s calculations.]
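
Since the figure itself is not reproduced here, a quick tally of the increments just listed conveys the magnitudes. This is a simple sum of the MHz figures in the preceding paragraph; the band labels are paraphrases, not official allocation names.

```python
# Tally of the flexible-use increments named in the preceding paragraph
# (low- and mid-band only, matching Figure 1's scope).

licensed = {
    "CBRS PALs, Auction 105 (3.5 GHz)": 70,
    "C-Band, Auction 107 (3.7-3.98 GHz)": 280,
    "600 MHz band, Auction 1002 (2017)": 70,
}
unlicensed_2020 = {
    "5.9 GHz shifted from the ITS set-aside": 45,
    "CBRS GAA (3.5 GHz)": 80,
    "6 GHz for Wi-Fi-type services": 1200,
}

print(f"Licensed: {sum(licensed.values())} MHz")                  # 420 MHz
print(f"Unlicensed (2020): {sum(unlicensed_2020.values())} MHz")  # 1,325 MHz
```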

But the figure does not capture the elephant in the room. Auction 107 has assigned licenses allocated 280 MHz of flexible-use mid-band spectrum, producing at least $94 billion in gross bids (of which about $13 billion will be paid to incumbent satellite licensees to reconfigure their operations so as to occupy just 200 MHz, rather than 500 MHz, of the 3.7-4.2 GHz band).[9] This crushes previous FCC sales; indeed, it constitutes about 42% of all auction receipts:

  • FCC auction receipts, 1994-2019: $117 billion[10]
  • FCC auction receipts, 2020 (Auctions 103 and 105): $12.1 billion
  • FCC auction winning bids, 2020 (Auction 107): $94 billion (gross bids including relocation costs, incentive payments, and before Assignment Phase payments)
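
The 42% figure checks out against the numbers above. A minimal verification, treating Auction 107’s gross bids as receipts even though the Assignment Phase had yet to conclude:

```python
# Checking "about 42% of all auction receipts" from the bullets above.

prior_receipts = 117e9   # 1994-2019
receipts_2020 = 12.1e9   # Auctions 103 and 105
auction_107 = 94e9       # gross bids, Assignment Phase pending

share = auction_107 / (prior_receipts + receipts_2020 + auction_107)
print(f"Auction 107 share of all receipts: {share:.1%}")  # ~42.1%
```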

The addition of the 280 MHz to existing flexible-use spectrum suitable for mobile (aka Commercial Mobile Radio Services, or CMRS) is the largest increment ever released. It will compose about one-fourth of the low- and mid-band frequencies available via liberal licenses. This constitutes a huge advance with respect to 5G deployments, but its effects go much further—promoting competition, spurring innovation in apps and devices and the Internet of Things, and pushing the technological envelope toward 6G and beyond. Notably, the U.S. has led this foray to a new frontier in spectrum allocation.

The FCC deserves praise for pushing this proceeding to fruition, so here it is: the C-Band reallocation is a very big deal and a major policy success. And more: in Auction 107, the commission very wisely sold overlay rights. Rather than waiting for administrative procedures to reconfigure wireless use while tightly supervising new “sharing” of the band, it (a) accepted the incumbents’ basic strategy for reallocation; (b) sold new prospective rights to high bidders, subject to protection of incumbents; (c) used a fraction of proceeds to fund incumbents cooperating with the reallocation, plussing-up payments for hitting deadlines; and (d) implicitly relied on the new licensees to push the relocation process forward.

Challenges

It is interesting that the FCC sort of articulated this useful model, and sort of did not:

For a successful public auction of overlay licenses in the 3.7-3.98 GHz band, bidders need to know before an auction commences when they will get access to that currently occupied spectrum as well as the costs they will incur as a condition of their overlay license. (FCC C-Band Order [Feb. 7, 2020], par. 110)

A germ of truth, but note: Auction 107 also demonstrated just the reverse. Rights were sold prior to clearing the airwaves, and bidders—while liable for “incentive payments”—do not know with certainty when the frequencies will be available for their use. Risk is embedded, as it widely is in financial assets (corporate equity shares are efficiently traded despite wide disagreement on future earnings), and yet markets perform. Indeed, the “certainty” approach touted by the FCC in its language about a “successful public auction” has long deterred efficient reallocations, as the incumbents’ exit process holds up the arrival of entrants. The central feature of the C-Band reallocation was not to create certainty, but to embed an overlay approach into the process. This draws incumbents and entrants together into positive-sum transactions (mediated by the FCC or negotiated party-to-party) in which they cooperate to create new productive opportunities, sharing the gains.

The inspiration for the C-Band reallocation of satellite spectrum was bottom-up. As with so much of the radio spectrum, the band devoted to satellite distribution of video (relays to and from an array of broadcast and cable TV systems and networks) was old and tired. For decades, applications and systems were locked in by law. They consumed lots of bandwidth while ignoring the emergence of newer technologies like fiber optics (a pointed reminder that products launched in the 1980s still counted as cutting-edge challenges for spectrum policy in 2021). Spying this mismatch, and seeking gains from trade, creative risk-takers petitioned the FCC.

In a mid-2017 request, computer chipmaker Intel and C-Band satellite carrier Intelsat (no corporate relationship) joined forces to ask for permission to expand the scope of satellite licenses. The proffered plan was for license holders to invest in spectrum economies by upgrading satellites and earth stations—magically creating new, unoccupied channels in prime mid-band frequencies perfect for highly valuable 5G services. All existing video transport services would continue, while society would enjoy way more advanced wireless broadband. All regulators had to do was allow “change of use” in existing licenses. Markets would do the rest: satellite operators would make efficient multi-billion-dollar investments, coordinating with each other and their customers, and then take bids from new users itching to access the prime 4 GHz spectrum. The transition to bold, new, more valuable applications would compensate legacy customers and service providers.

This “spectrum sharing” can spin gold – seizing on capitalist discovery and demand revelation in market bargains. Voila, the 21st century, delivered.

Well, yes and no. At first, the FCC filing was a yawner, drawing the standard bureaucratic response. But this one took off when Chairman Pai—alertly, and in the public interest—embraced the proposal, putting it on the July 12, 2018 FCC meeting agenda. Intelsat’s market cap jumped from about $500 million to over $4.5 billion: the spectrum it was using was worth far more than the service it was providing, and the jump was visible evidence of the prospect that Intelsat might realize some substantial fraction of that resource revaluation.[11]

While the Pai FCC leaned in the proper policy direction, politics soon blew the process down. Congress denounced the “private auction” as a “windfall,” bellowing against the unfairness of allowing corporations (some foreign-owned!) to cash out. The populist message was upside-down. The social damage created by mismanagement of spectrum—millions of Americans paying more and getting less from wireless than they otherwise would, robbing ordinary citizens of vast consumer surplus—was being fixed by entrepreneurial initiative. Moreover, the public gains (lower prices plus innovation externalities spun off from liberated bandwidth) were undoubtedly far greater than any rents captured by the incumbent licensees. And there was a great bonus to spur future progress: rewarding the parties that initiate and secure efficiency-enhancing rights unleashes vastly more productive activity.

But the populist winds—gale force and bipartisan—spun the FCC.

It was legally correct that Intelsat and its rival satellite carriers did not own the spectrum allocated to the C-Band. Indeed, that was the root of the problem. And here’s a fatal catch: in applying for broader spectrum property rights, they revealed a valuable discovery. The FCC, posing as referee, turned competitor and appropriated the proffered business plan on behalf of its client (the U.S. government), and then auctioned it to bidders. Regulators did tip the incumbents, whose help was still needed in reorganizing the C-Band, setting $3.3 billion as a fair price for “moving costs” (changing out technology to reduce their transmission footprints) and dangling another $9.7 billion in “incentive payments” not to dilly-dally. In total, carriers have bid some $93.9 billion, or $1.02 per MHz-Pop.[12] This is 4.7 times the price paid for the Priority Access Licenses (PALs) allocated 70 MHz in Auction 105 earlier in 2020.
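
For readers who want to check the arithmetic, a back-of-the-envelope sketch follows. The population figure is an assumption I have supplied (roughly 328 million pops, a round number); actual license-area coverage, especially for the county-based PALs, differs in practice.

```python
# Rough $/MHz-Pop check on the figures in the text.

US_POPS = 328e6  # assumed pops covered; actual license coverage varies

def per_mhz_pop(gross_bids, mhz, pops=US_POPS):
    """Price per MHz of bandwidth per person covered."""
    return gross_bids / (mhz * pops)

c_band = per_mhz_pop(93.9e9, 280)  # ~$1.02, matching the text
print(f"C-Band (Auction 107): ${c_band:.2f} per MHz-Pop")

# The text's 4.7x multiple implies PALs cleared near $0.22/MHz-Pop; a naive
# full-population calculation (4.5e9 / (70 MHz * pops)) gives ~$0.20, the
# gap reflecting the PALs' partial county coverage.
print(f"Implied PAL price: ${c_band / 4.7:.2f} per MHz-Pop")
```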

The TOTM assignment was not to evaluate Ajit Pai but to evaluate the Pai FCC and its spectrum policies. On that scale, great value was delivered by the Intel-Intelsat proposal, and the FCC’s alert endorsement, offset in some measure by the long-term losses that will likely flow from the dirigiste retreat to fossilized spectrum rights controlled by diktat.

Sharing Nicely

And that takes us to 2020’s Auction 105 (Citizens Broadband Radio Service, or CBRS). The U.S. has lagged much of the world in allocating flexible-use spectrum rights in the 3.5 GHz band. Ireland auctioned rights to use 350 MHz in May 2017, and many countries did likewise between then and 2020, distributing far more than the 70 MHz allocated to the Priority Access Licenses (PALs); allocations ranged from 150 MHz to 390 MHz. The Pai FCC can plausibly attribute the lag to “preexisting conditions.” Here, however, I will stress that the Pai FCC did not substantially further our understanding of the costs of “spectrum sharing” under coordinating devices imposed by the FCC.

All commercially valuable spectrum bands are shared. The most intensely shared, in the relevant economic sense, are those bands curated by mobile carriers. These frequencies are complemented by extensive network capital supplied by investors, and permit millions of users—including international roamers—to gain seamless connectivity. Unlicensed bands, alternatively, tend to separate users spatially, powering down devices to localize footprints. These limits work better in situations where users desire short transmissions, like a Bluetooth link from iPhone to headphone or when bits can be handed off to a wide area network by hopping 60 feet to a local “hot spot.” The application of “spectrum sharing” to imply a non-exclusive (or unlicensed) rights regime is, at best, highly misleading. Whenever conditions of scarcity exist, meaning that not all uses can be accommodated without conflict, some rationing follows. It is commonly done by price, behavioral restriction, or both.

In CBRS, the FCC has imposed three layers of “priority” access across the 3550-3700 MHz band. Certain government radars are assumed to be fixed and must be protected. When in use, these systems demand other wireless services stay silent on particular channels. Next in line are PAL owners, parties that have paid for exclusivity but are not guaranteed access to a given channel. These rights, which sold for about $4.5 billion, are allocated dynamically by a controller (a Spectrum Access System, or SAS); the radios and networks involved automatically and continuously check in to obtain spectrum-space permissions. Seven PALs of 10 MHz each have been assigned, 70 MHz in total. Finally, General Authorized Access (GAA) rights are given without limit or exclusivity to radio devices across the 80 MHz remaining in the band, plus any PALs not in use. Some 5G phones are already equipped to use such bands on an unlicensed basis.
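
The tiered rationing just described can be sketched in a few lines. This is a toy model of the priority ordering only (my own simplification); a real Spectrum Access System also handles geolocation, interference modeling, and environmental sensing.

```python
# Toy model of CBRS-style three-tier channel assignment (illustrative only).

from dataclasses import dataclass
from typing import List, Optional

TIERS = ("incumbent", "PAL", "GAA")  # descending priority

@dataclass
class Channel:
    mhz: int
    user: Optional[str] = None
    tier: Optional[str] = None

def grant(band: List[Channel], user: str, tier: str) -> Optional[Channel]:
    """Assign the first channel that is free or held at lower priority."""
    rank = TIERS.index(tier)
    for ch in band:
        if ch.user is None or TIERS.index(ch.tier) > rank:
            ch.user, ch.tier = user, tier  # preempts any lower-tier occupant
            return ch
    return None  # nothing available at this priority level

band = [Channel(10) for _ in range(15)]       # 3550-3700 MHz in 10 MHz slices
grant(band, "government radar", "incumbent")  # when active, others stay silent
grant(band, "carrier A", "PAL")               # paid exclusivity vs. GAA only
grant(band, "WISP B", "GAA")                  # opportunistic, no exclusivity
```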

We shall see how the U.S. system works in comparison to alternatives. What is important to note is that the particular form of “spectrum sharing” is neither necessary nor free. As is standard outside the U.S., exclusive rights analogous to CMRS licenses could have been auctioned here, with U.S. government radars given vested rights.

One point that is routinely missed is that the decision to have the U.S. government partition the rights into three layers immediately conceded that U.S. government priority applications (for radar) would never shift. That is asserted as though it were a proposition needing no justification, but it is precisely the sort of impediment to efficiency that has plagued spectrum reallocations for decades. It was, for instance, the 2002 assumption behind TV “white spaces”—that 402 MHz of TV-band frequencies was fixed in place, and that the unused channels could never be repackaged, sold as exclusive rights, and diverted to higher-valued uses. That unexamined assertion has since been belied, as seen in the reduction of the band from 402 MHz to 235 MHz following Auctions 73 (2008) and 1001/1002 (2016-17), as well as in the clear possibility that remaining TV broadcasts could today be transferred entirely to cable, satellite, and OTT broadband (as, effectively, they already have been). The problem in CBRS is that the rights now distributed for the 80 MHz of unlicensed use, with its protections of certain priority services, do not sprinkle the proper rights into the market such that positive-sum transitions can be negotiated. We’re stuck with whatever inefficiencies this “preexisting condition” of the 3.5 GHz band might endow, unless another decade-long FCC spectrum allocation can move things forward.[13]

Already visible is that the rights sold as PALs in CBRS fetched only about 20% of the value, per MHz-Pop, of the rights sold in the C-Band. This differential reflects the power restrictions and overhead costs embedded in the FCC’s sharing rules for CBRS (involving dynamic allocation of the exclusive access rights conveyed in PALs) but avoided in the C-Band, where the sharing arrangements are delegated to the licensees. The prices bidders paid reveal that they see the latter rights as more productive, with opportunities to host more services.

There should be greater recognition of the relevant trade-offs in imposing coexistence rules. Yet the Pai FCC succumbed, in the 5.9 GHz and 6 GHz bands, to the tried-and-true options of Regulation Past. This was hugely ironic in the former, where the FCC had in 1999 imposed unlicensed access under rules that favored specific automotive informatics—Dedicated Short-Range Communications (DSRC)—that proved a 20-year bust. In diagnosing this policy blunder, the FCC then repeated it, splitting off a 45 MHz band with Wi-Fi-friendly unlicensed rules and leaving 30 MHz to continue as the 1999 set-aside for DSRC. A liberalization of rights that allowed a “private auction” to change the use of the band would have been the preferred approach. Instead, we are left with a partition of the band into rival rule regimes, again established by administrative fiat.

This approach was then imposed again in the large 1.2 GHz unlicensed allocation surrounding 6 GHz, making a big 2020 splash. The FCC here assumed, categorically, that unlicensed rules are the best way to sponsor spectrum coordination, ignoring the costs of that coordination. The commission also appears to have forgotten the progress it has made with innovative policy solutions that pull market forces in through “overlay” licenses. These useful devices were used, in one form or another, to reallocate spectrum for 2G in Auction 4, AWS in Auction 66, millimeter bands in Auctions 102 and 103, the “TV Incentive Auction,” and the satellite C-Band in Auction 107, and they recently appeared as star players in the January 2021 FCC plan to rationalize the complex mix of rights scattered around the 2.5 GHz band.[14] That band is too complicated for administrators to untangle; it could be transactionally more efficient to let market competitors figure it out.

The Future

The reallocations in the 5.9 GHz and 6 GHz bands may yet host productive services. One can hope. But how will regulators know that the options allowed, and taken, are superior to the alternatives—suppressed by law for the next five, 10, or 20 years—that might have emerged had competitors had the right to test business models or technologies disfavored by the regulators’ best-laid plans? That is the thinking that locked in the TV band, the C-Band for satellites, and the ITS band. It’s what we learned to be problematic throughout the Political Radio Spectrum. We shall see, as Chairman Pai speculated, what future chapters these decisions leave for future editions.


[1]   https://www.fcc.gov/document/fcc-votes-establish-office-economics-analytics-0

[2]   https://www.fcc.gov/document/fcc-opens-office-economics-and-analytics

[3]   Thomas Hazlett, Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins, Resources for the Future Discussion Paper 11-23 (May 2011); David Honig, FCC Reorganization: How Replacing Silos with Functional Organization Would Advance Civil Rights, 3 University of Pennsylvania Journal of Law and Public Affairs 18 (Aug. 2018).

[4] It is with great sadness that Jerry Ellig, the 2017-18 FCC Chief Economist who might well offer the most careful analysis of such a structural reform, will not be available for the task – one which he had already begun, writing this recent essay with two other FCC Chief Economists: Babette Boliek, Jerry Ellig and Jeff Prince, Improved economic analysis should be lasting part of Pai’s FCC legacy, The Hill (Dec. 29, 2020).  Jerry’s sudden passing, on January 21, 2021, is a deep tragedy.  Our family weeps for his wonderful wife, Sandy, and his precious daughter, Kat. 

[5]  As argued in: Thomas Hazlett, “The best way for the FCC to enable a 5G future,” Reuters (Jan. 17, 2018).

[6]  In 2018-19, FCC Auctions 101 and 102 offered licenses allocated 1,550 MHz of bandwidth in the 24 GHz and 28 GHz bands, although some of the bandwidth had previously been assigned and post-auction confusion over interference with adjacent frequency uses (in 24 GHz) has impeded some deployments.  In 2020, Auction 103 allowed competitive bidding for licenses to use 37, 39, and 47 GHz frequencies, 3400 MHz in aggregate.  Net proceeds to the FCC in 101, 102 and 103 were:  $700.3 million, $2.02 billion, and $7.56 billion, respectively.

[7]   I estimate that some 70 MHz of unlicensed bandwidth, allocated for television white space devices, was reduced pursuant to the Incentive Auction in 2017.  This, however, was baked into spectrum policy prior to the Pai FCC.

[8]   Notably, 64-71 GHz was allocated for unlicensed radio operations in the Spectrum Frontiers proceeding, adjacent to the 57-64 GHz unlicensed bands.  See Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, et al., Report and Order and Further Notice of Proposed Rulemaking, 31 FCC Rcd 8014 (2016), 8064-65, para. 130.

[9]   The revenues reflect bids made in the Clock phase of Auction 107.  An Assignment Phase has yet to occur as of this writing.

[10]  The 2021 FCC Budget request, p. 34: “As of December 2019, the total amount collected for broader government use and deficit reduction since 1994 exceeds $117 billion.” 

[11]   Kerrisdale Management issued a June 2018 report that tied the proceeding to a dubious source: “to the market-oriented perspective on spectrum regulation – as articulated, for instance, by the recently published book The Political Spectrum by former FCC chief economist Thomas Winslow Hazlett – [that] the original sin of the FCC was attempting to dictate from on high what licensees should or shouldn’t do with their spectrum. By locking certain bands into certain uses, with no simple mechanism for change or renegotiation, the agency guaranteed that, as soon as technological and commercial realities shifted – as they do constantly – spectrum use would become inefficient.” 

[12]   Net proceeds will be reduced to reflect bidding credits extended to small businesses, but additional bids will be received in the Assignment Phase of Auction 107, still to be held. Likely totals will remain somewhere around current levels.

[13]  The CBRS band is composed of frequencies at 3550-3700 MHz.  The top 50 MHz of that band was officially allocated in 2005 in a proceeding that started years earlier.  It was then curious that the adjacent 100 MHz was not included. 

[14] FCC Seeks Comment on Procedures for 2.5 GHz Reallocation (Jan. 13, 2021).

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Geoffrey A. Manne is the president and founder of the International Center for Law and Economics.]

I’m delighted to add my comments to the chorus of voices honoring Ajit Pai’s remarkable tenure at the Federal Communications Commission. I’ve known Ajit longer than most. We were classmates in law school … let’s just say “many” years ago. Among the other symposium contributors I know of only one—fellow classmate, Tom Nachbar—who can make a similar claim. I wish I could say this gives me special insight into his motivations, his actions, and the significance of his accomplishments, but really it means only that I have endured his dad jokes and interminable pop-culture references longer than most. 

But I can say this: Ajit has always stood out as a genuinely humble, unfailingly gregarious, relentlessly curious, and remarkably intelligent human being, and he deployed these characteristics to great success at the FCC.   

Ajit’s tenure at the FCC was marked by an abiding appreciation for the importance of competition, both as a guiding principle for new regulations and as a touchstone to determine when to challenge existing ones. As others have noted (and as we have written elsewhere), that approach was reflected significantly in the commission’s Restoring Internet Freedom Order, which made competition—and competition enforcement by the antitrust agencies—the centerpiece of the agency’s approach to net neutrality. But I would argue that perhaps Chairman Pai’s greatest contribution to bringing competition to the forefront of the FCC’s mandate came in his work on media modernization.

Fairly early in his tenure at the commission, Ajit raised concerns with the FCC’s failure to modernize its media-ownership rules. In response to the FCC’s belated effort to initiate the required 2010 and 2014 Quadrennial Reviews of those rules, then-Commissioner Pai noted that the commission had abdicated its responsibility under the statute to promote competition. Not only was the FCC proposing to maintain a host of outdated existing rules, but it was also moving to impose further constraints (through new limitations on the use of Joint Sales Agreements (JSAs)). As Ajit noted, such an approach was antithetical to competition:

In smaller markets, the choice is not between two stations entering into a JSA and those same two stations flourishing while operating completely independently. Rather, the choice is between two stations entering into a JSA and at least one of those stations’ viability being threatened. If stations in these smaller markets are to survive and provide many of the same services as television stations in larger markets, they must cut costs. And JSAs are a vital mechanism for doing that.

The efficiencies created by JSAs are not a luxury in today’s digital age. They are necessary, as local broadcasters face fierce competition for viewers and advertisers.

Under then-Chairman Tom Wheeler, the commission voted to adopt the Quadrennial Review in 2016, issuing rules that largely maintained the status quo and, at best, paid tepid lip service to the massive changes in the competitive landscape. As Ajit wrote in dissent:

The changes to the media marketplace since the FCC adopted the Newspaper-Broadcast Cross-Ownership Rule in 1975 have been revolutionary…. Yet, instead of repealing the Newspaper-Broadcast Cross-Ownership Rule to account for the massive changes in how Americans receive news and information, we cling to it.

And over the near-decade since the FCC last finished a “quadrennial” review, the video marketplace has transformed dramatically…. Yet, instead of loosening the Local Television Ownership Rule to account for the increasing competition to broadcast television stations, we actually tighten that regulation.

And instead of updating the Local Radio Ownership Rule, the Radio-Television Cross-Ownership Rule, and the Dual Network Rule, we merely rubber-stamp them.

The more the media marketplace changes, the more the FCC’s media regulations stay the same.

As Ajit also accurately noted at the time:

Soon, I expect outside parties to deliver us to the denouement: a decisive round of judicial review. I hope that the court that reviews this sad and total abdication of the administrative function finds, once and for all, that our media ownership rules can no longer stay stuck in the 1970s consistent with the Administrative Procedure Act, the Communications Act, and common sense. The regulations discussed above are as timely as “rabbit ears,” and it’s about time they go the way of those relics of the broadcast world. I am hopeful that the intervention of the judicial branch will bring us into the digital age.

And, indeed, just this week the case was argued before the Supreme Court.

In the interim, however, Ajit became Chairman of the FCC. And in his first year in that capacity, he took up a reconsideration of the 2016 Order. This 2017 Order on Reconsideration is the one that finally came before the Supreme Court. 

Consistent with his unwavering commitment to promote media competition—and no longer a minority commissioner shouting into the wind—Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers:

Today we end the 2010/2014 Quadrennial Review proceeding. In doing so, the Commission not only acknowledges the dynamic nature of the media marketplace, but takes concrete steps to update its broadcast ownership rules to reflect reality…. In this Order on Reconsideration, we refuse to ignore the changed landscape and the mandates of Section 202(h), and we deliver on the Commission’s promise to adopt broadcast ownership rules that reflect the present, not the past. Because of our actions today to relax and eliminate outdated rules, broadcasters and local newspapers will at last be given a greater opportunity to compete and thrive in the vibrant and fast-changing media marketplace. And in the end, it is consumers that will benefit, as broadcast stations and newspapers—those media outlets most committed to serving their local communities—will be better able to invest in local news and public interest programming and improve their overall service to those communities.

Ajit’s approach was certainly deregulatory. But more importantly, it was realistic, well-reasoned, and responsive to changing economic circumstances. Unlike most of his predecessors, Ajit was unwilling to accede to the torpor of repeated judicial remands (on dubious legal grounds, as we noted in our amicus brief urging the Court to grant certiorari in the case), permitting facially and wildly outdated rules to persist in the face of massive and obvious economic change. 

Like Ajit, I am not one to advocate regulatory action lightly, especially in the (all-too-rare) face of judicial review that suggests an agency has exceeded its discretion. But in this case, the need for dramatic rule change—here, to deregulate—was undeniable. The only abuse of discretion was on the part of the court, not the agency. As we put it in our amicus brief:

[T]he panel vacated these vital reforms based on mere speculation that they would hinder minority and female ownership, rather than grounding its action on any record evidence of such an effect. In fact, the 2017 Reconsideration Order makes clear that the FCC found no evidence in the record supporting the court’s speculative concern.

…In rejecting the FCC’s stated reasons for repealing or modifying the rules, absent any evidence in the record to the contrary, the panel substituted its own speculative concerns for the judgment of the FCC, notwithstanding the FCC’s decades of experience regulating the broadcast and newspaper industries. By so doing, the panel exceeded the bounds of its judicial review powers under the APA.

Key to Ajit’s conclusion that competition in local media markets could be furthered by permitting more concentration was his awareness that the relevant market for analysis couldn’t be limited to traditional media outlets like broadcasters and newspapers; it had to include the likes of cable networks, streaming video providers, and social-media platforms, as well. As Ajit put it in a recent speech:

The problem is a fundamental refusal to grapple with today’s marketplace: what the service market is, who the competitors are, and the like. When assessing competition, some in Washington are so obsessed with the numerator, so to speak—the size of a particular company, for instance—that they’ve completely ignored the explosion of the denominator—the full range of alternatives in media today, many of which didn’t exist a few years ago.

When determining a particular company’s market share, a candid assessment of the denominator should include far more than just broadcast networks or cable channels. From any perspective (economic, legal, or policy), it should include any kinds of media consumption that consumers consider to be substitutes. That could be TV. It could be radio. It could be cable. It could be streaming. It could be social media. It could be gaming. It could be still something else. The touchstone of that denominator should be “what content do people choose today?”, not “what content did people choose in 1975 or 1992, and how can we artificially constrict our inquiry today to match that?”

For some reason, this simple and seemingly undeniable conception of the market escapes virtually all critics of Ajit’s media-modernization agenda. Indeed, even Justice Stephen Breyer in this week’s oral argument seemed baffled by the notion that more concentration could entail more competition:

JUSTICE BREYER: I’m thinking of it solely as a — the anti-merger part, in — in anti-merger law, merger law generally, I think, has a theory, and the theory is, beyond a certain point and other things being equal, you have fewer companies in a market, the harder it is to enter, and it’s particularly harder for smaller firms. And, here, smaller firms are heavily correlated or more likely to be correlated with women and minorities. All right?

The opposite view, which is what the FCC has now chosen, is — is they want to move or allow to be moved towards more concentration. So what’s the theory that that wouldn’t hurt the minorities and women or smaller businesses? What’s the theory the opposite way, in other words? I’m not asking for data. I’m asking for a theory.

Of course, as Justice Breyer should surely know—and as I know Ajit Pai knows—counting the number of firms in a market is a horrible way to determine its competitiveness. In this case, the competition from internet media platforms, particularly for advertising dollars, is immense. A regulatory regime that prohibits traditional local-media outlets from forging efficient joint ventures or from obtaining the scale necessary to compete with those platforms does not further competition. Even if such a rule might temporarily result in more media outlets, eventually it would result in no media outlets, other than the large online platforms. The basic theory behind the Reconsideration Order—to answer Justice Breyer—is that outdated government regulation imposes artificial constraints on the ability of local media to adopt the organizational structures necessary to compete. Removing those constraints may not prove a magic bullet that saves local broadcasters and newspapers, but allowing the rules to remain absolutely ensures their demise. 

Ajit’s commitment to furthering competition in telecommunications markets remained steadfast throughout his tenure at the FCC. From opposing restrictive revisions to the agency’s spectrum screen to dissenting from the effort to impose a poorly conceived and retrograde regulatory regime on set-top boxes, to challenging the agency’s abuse of its merger review authority to impose ultra vires regulations, to, of course, rolling back his predecessor’s unsupportable Title II approach to net neutrality—and on virtually every issue in between—Ajit sought at every turn to create a regulatory backdrop conducive to competition.

Tom Wheeler, Pai’s predecessor at the FCC, claimed that his personal mantra was “competition, competition, competition.” His greatest legacy, in that regard, was in turning over the agency to Ajit.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]

It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug, alone, has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he has attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.

I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.

Thus, while I don’t think it’s going to be the most notable policy he’s participated in at the commission, I would like to look at Ajit’s tenure through the lens of one small part of one fairly specific proceeding: the commission’s decision to include SpaceX as a low-latency provider in the Rural Digital Opportunity Fund (RDOF) Auction.

The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the RDOF, which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of under-served areas should receive more than $16 billion in federal funding should spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.

But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.

That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology and previous satellite offerings had been high-latency, which is good for some uses but not others.

But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.

I think we are unlikely to see such regulatory risk-taking, both technical and political, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views on topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Jerry Ellig was a research professor at The George Washington University Regulatory Studies Center and served as chief economist at the Federal Communications Commission from 2017 to 2018. Tragically, he passed away Jan. 20, 2021. TOTM is honored to publish his contribution to this symposium.]

One significant aspect of Chairman Ajit Pai’s legacy is not a policy change, but an organizational one: establishment of the Federal Communications Commission’s (FCC’s) Office of Economics and Analytics (OEA) in 2018.

Prior to OEA, most of the FCC’s economists were assigned to the various policy bureaus, such as Wireless, Wireline Competition, Public Safety, Media, and International. Each of these bureaus had its own chief economist, but the rank-and-file economists reported to the managers who ran the bureaus, usually attorneys who also developed policy and wrote regulations. In the words of former FCC Chief Economist Thomas Hazlett, the FCC had “no location anywhere in the organizational structure devoted primarily to economic analysis.”

Establishment of OEA involved four significant changes. First, most of the FCC’s economists (along with data strategists and auction specialists) are now grouped together into an organization separate from the policy bureaus, and they are managed by other economists. Second, the FCC rules establishing the new office tasked OEA with reviewing every rulemaking, reviewing every other item with economic content that comes before the commission for a vote, and preparing a full benefit-cost analysis for any regulation with $100 million or more in annual economic impact. Third, a joint memo from the FCC’s Office of General Counsel and OEA specifies that economists are to be involved in the early stages of all rulemakings. Fourth, the memo also indicates that FCC regulatory analysis should follow the principles articulated in Executive Order 12866 and Office of Management and Budget Circular A-4 (while specifying that the FCC, as an independent agency, is not bound by the executive order).

While this structure for managing economists was new for the FCC, it is hardly uncommon in federal regulatory agencies. Numerous independent agencies that deal with economic regulation house their economists in a separate bureau or office, including the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Surface Transportation Board, the Office of the Comptroller of the Currency, and the Federal Trade Commission. The SEC offers especially close parallels to the FCC: a guidance memo adopted in 2012 by the SEC’s Office of General Counsel and Division of Risk, Strategy and Financial Innovation (the name of the division where economists and other analysts were located) specifies that economists are to be involved in the early stages of all rulemakings and articulates best analytical practices based on Executive Order 12866 and Circular A-4.

A separate economics office offers several advantages over the FCC’s prior approach. It gives the economists greater freedom to offer frank advice, enables them to conduct higher-quality analysis more consistent with the norms of their profession, and may ultimately make it easier to uphold FCC rules that are challenged in court.

Independence.  When I served as chief economist at the FCC in 2017-2018, I gathered from conversations that the most common practice in the past was for attorneys who wrote rules to turn to economists for supporting analysis after key decisions had already been made. This was not always the process, but it often occurred. The internal working group of senior FCC career staff who drafted the plan for OEA reached similar conclusions. After the establishment of OEA, an FCC economist I interviewed noted how his role had changed: “My job used to be to support the policy decisions made in the chairman’s office. Now I’m much freer to speak my own mind.”

Ensuring economists’ independence is not a problem unique to the FCC. In a 2017 study, Stuart Shapiro found that most of the high-level economists he interviewed who worked on regulatory impact analyses in federal agencies perceive that economists can be more objective if they are located outside the program office that develops the regulations they are analyzing. As one put it, “It’s very difficult to conduct a BCA [benefit-cost analysis] if our boss wrote what you are analyzing.” Interviews with senior economists and non-economists who work on regulation that I conducted for an Administrative Conference of the United States project in 2019 revealed similar conclusions across federal agencies. Economists located in organizations separate from the program office said that structure gave them greater independence and ability to develop better analytical methodologies. On the other hand, economists located in program offices said they experienced or knew of instances where they were pressured or told to produce an analysis with the results decision-makers wanted.

The FTC provides an informative case study. From 1955 to 1961, many of the FTC’s economists reported to the attorneys who conducted antitrust cases; in 1961, they were moved back into a separate Bureau of Economics. Fritz Mueller, the FTC chief economist responsible for the move, noted that the antitrust economists had originally been placed under the attorneys because the attorneys wanted more control over the economic analysis. A 2015 evaluation by the FTC’s Inspector General concluded that the Bureau of Economics’ existence as a separate organization improves its ability to offer “unbiased and sound economic analysis to support decision-making.”

Higher-quality analysis. An issue closely related to economists’ independence is the quality of the economic analysis. Executive branch regulatory economists interviewed by Richard Williams expressed concern that the economic analysis was more likely to be changed to support decisions when the economists are located in the program office that writes the regulations. More generally, a study that Catherine Konieczny and I conducted while we were at the FCC found that executive branch agencies are more likely to produce higher-quality regulatory impact analyses if the economists responsible for the analysis are in an independent economics office rather than the program office.

Upholding regulations in court. In Michigan v. EPA, the Supreme Court held that it is unreasonable for agencies to refuse to consider regulatory costs if the authorizing statute does not prohibit them from doing so. This precedent will likely increase judicial expectations that agencies will consider economic issues when they issue regulations. The FCC’s OGC-OEA memo cites examples of cases where the quality of the FCC’s economic analysis either helped or harmed the commission’s ability to survive legal challenge under the Administrative Procedure Act’s “arbitrary and capricious” standard. More systematically, a recent Regulatory Studies Center working paper finds that a higher-quality economic analysis accompanying a regulation reduces the likelihood that courts will strike down the regulation, provided that the agency explains how it used the analysis in decisions.

Two potential disadvantages of a separate economics office are that it may make the economists easier to ignore (what former FCC Chief Economist Tim Brennan calls the “Siberia effect”) and may lead the economists to produce research that is less relevant to the practical policy concerns of the policymaking bureaus. The FCC’s reorganization plan took these disadvantages seriously.

To ensure that the ultimate decision-makers—the commissioners—have access to the economists’ analysis and recommendations, the rules establishing the office give OEA explicit responsibility for reviewing all items with economic content that come before the commission. Each item is accompanied by a cover memo that indicates whether OEA believes there are any significant issues, and whether they have been dealt with adequately. To ensure that economists and policy bureaus work together from the outset of regulatory initiatives, the OGC-OEA memo instructs:

Bureaus and Offices should, to the extent practicable, coordinate with OEA in the early stages of all Commission-level and major Bureau-level proceedings that are likely to draw scrutiny due to their economic impact. Such coordination will help promote productive communication and avoid delays from the need to incorporate additional analysis or other content late in the drafting process. In the earliest stages of the rulemaking process, economists and related staff will work with programmatic staff to help frame key questions, which may include drafting options memos with the lead Bureau or Office.

While presiding over his final commission meeting on Jan. 13, Pai commented, “It’s second nature now for all of us to ask, ‘What do the economists think?’” The real test of this institutional innovation will be whether that practice continues under a new chair in the next administration.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Daniel Lyons is a professor of law at Boston College Law School and a visiting fellow at the American Enterprise Institute.]

For many, the chairmanship of Ajit Pai is notable for its headline-grabbing substantive achievements, including the Restoring Internet Freedom order, 5G deployment, and rural buildout—many of which have been or will be discussed in this symposium. But that conversation is incomplete without also acknowledging Pai’s careful attention to the basic blocking and tackling of running a telecom agency. The last four years at the Federal Communications Commission were marked by small but significant improvements in how the commission functions, and few are more important than the chairman’s commitment to transparency.

Draft Orders: The Dark Ages Before 2017

This commitment is most notable in Pai’s revisions to the open meeting process. From time immemorial, the FCC chairman would set the agenda for the agency’s monthly meeting by circulating draft orders to the other commissioners three weeks in advance. But the public was deliberately excluded from that distribution list. During this period, the commissioners would read proposals, negotiate revisions behind the scenes, then meet publicly to vote on final agency action. But only after the meeting—often several days later—would the actual text of the order be made public.

The opacity of this process had several adverse consequences. Most obviously, the public lacked details about the substance of the commission’s deliberations. The Government in the Sunshine Act requires the agency’s meetings to be made public so the American people know what their government is doing. But without the text of the orders under consideration, the public had only a superficial understanding of what was happening each month. The process was reminiscent of House Speaker Nancy Pelosi’s famous gaffe that Congress needed to “pass the [Affordable Care Act] bill so that you can find out what’s in it.” During the high-profile deliberations over the Open Internet Order in 2015, then-Commissioner Pai made significant hay over this secrecy, repeatedly posting pictures of himself with the 300-plus-page order on Twitter with captions such as “I wish the public could see what’s inside” and “the public still can’t see it.”

Other consequences were less apparent, but more detrimental. Because the public lacked detail about key initiatives, the telecom media cycle could be manipulated by strategic leaks designed to shape the final vote. As then-Commissioner Pai testified to Congress in 2016:

[T]he public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Sometimes, this secrecy backfired on the chairman, such as when net-neutrality advocates used media pressure to shape the 2014 Open Internet NPRM. Then-Chairman Tom Wheeler’s proposed order sought to follow the roadmap laid out by the D.C. Circuit’s Verizon decision, relying on Section 706, with broadband left classified under Title I, to prevent ISPs from blocking content or acting in a “commercially unreasonable manner.” Proponents of a more aggressive Title II approach leaked these details to the media in a negative light, prompting tech journalists and advocates to unleash a wave of criticism alleging the chairman was “killing off net neutrality to…let the big broadband providers double charge.” In full damage-control mode, Wheeler attempted to “set the record straight” about “a great deal of misinformation that has recently surfaced regarding” the draft order. But the tempest created by these leaks continued, pressuring Wheeler into adding a Title II option to the NPRM—which, of course, became the basis of the 2015 final rule.

This secrecy also harmed agency bipartisanship, as minority commissioners sometimes felt as much in the dark as the general public. As Wheeler scrambled to address Title II advocates’ concerns, he reportedly shared revised drafts with fellow Democrats but did not circulate the final draft to Republicans until less than 48 hours before the vote—leading Pai to remark cheekily that “when it comes to the Chairman’s latest net neutrality proposal, the Democratic Commissioners are in the fast lane and the Republican Commissioners apparently are being throttled.” Similarly, Pai complained during the 2014 spectrum screen proceeding that “I was not provided a final version of the item until 11:50 p.m. the night before the vote and it was a substantially different document with substantively revised reasoning than the one that was previously circulated.”

Letting the Sunshine In

Eliminating this culture of secrecy was one of Pai’s first decisions as chairman. Less than a month after assuming the reins at the agency, he announced that the FCC would publish all draft items at the same time they are circulated to commissioners, typically three weeks before each monthly meeting. While this move was largely applauded, some worried that such transparency would impede the agency’s operations. One critic suggested that pre-meeting publication would hamper negotiations among commissioners: “Usually, drafts created negotiating room…Now the chairman’s negotiating position looks like a final position, which undercuts negotiating ability.” Another, while supportive of the change, was concerned that the need to put a draft order in final form well before a meeting might add “a month or more to the FCC’s rulemaking adoption process.”

Fortunately, these concerns proved unfounded. The Pai era turned out to be the most productive in recent memory, averaging just over six items per month, double the average under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan—compared to 33% and 69.9%, respectively, under Chairman Wheeler.

This increased transparency also improved the overall quality of the agency’s work product. In a 2018 speech before the Free State Foundation, Commissioner Mike O’Rielly explained that “drafts are now more complete and more polished prior to the public reveal, so edits prior to the meeting are coming from Commissioners, as opposed to there being last minute changes—or rewrites—from staff or the Office of General Counsel.” Publishing draft orders in advance allows the public to flag potential issues for revision before the meeting, which improves the quality of the final draft and reduces the risk of successful post-meeting challenges via motions for reconsideration or petitions for judicial review. O’Rielly went on to note that the agency seemed to be running more efficiently as well, as “[m]eetings are targeted to specific issues, unnecessary discussions of non-existent issues have been eliminated, [and] conversations are more productive.”

Other Reforms

While pre-meeting publication was the most visible improvement to agency transparency, several other initiatives are also worth mentioning.

  • Limiting Editorial Privileges: Chairman Pai dramatically limited “editorial privileges,” a longtime tradition that allowed agency staff to make changes to an order’s text even after the final vote. Under Pai, editorial privileges were limited to technical and conforming edits only; substantive changes were not permitted unless they were proposed directly by a commissioner and only in response to new arguments offered by a dissenting commissioner. This reduces the likelihood of a significant change being introduced outside the public eye.
  • Fact Sheet: Adopting a suggestion from Commissioner Mignon Clyburn, Pai made it a practice to preface each published draft order with a one-page fact sheet that summarized the item in lay terms as much as possible. This made the agency’s monthly work more accessible and transparent to members of the public who lacked the time to wade through the full text of each draft order.
  • Online Transparency Dashboard: Pai also launched an online dashboard on the agency’s website. This dashboard offers metrics on the number of items currently pending at the commission by category, as well as quarterly trends over time.
  • Restricting Comment on Upcoming Items: As a gesture of respect to fellow commissioners, Pai committed that the chairman’s office would not brief the press or members of the public, or publish a blog, about an upcoming matter before it was shared with other commissioners. This was another step toward reducing the strategic use of leaks or selective access to guide the tech media news cycle.

And while it’s technically not a transparency reform, Pai also deserves credit for his willingness to engage the public as the face of the agency. He was the first FCC commissioner to join Twitter, and throughout his chairmanship he maintained an active social media presence that helped personalize the agency and make it more accessible. His commitment to this channel is all the more impressive when one considers the way some opponents used these platforms to hurl a steady stream of hateful, often violent and racist invective at him during his tenure.

Pai deserves tremendous credit for spearheading these efforts to bring the agency out of the shadows and into the sunlight. Of course, he was not working alone. Pai shares credit with other commissioners and staff who supported transparency and worked to bring these policies to fruition, most notably former Commissioner O’Rielly, who beat a steady drum for process reform throughout his tenure.

We do not yet know whom President Joe Biden will appoint as Pai’s successor. It is fair to assume that whoever is chosen will seek to put his or her own stamp on the agency. But let’s hope that enhanced transparency and the other process reforms enacted over the past four years remain a staple of agency practice moving forward. They may not be flashy, but they may prove to be the most significant and long-lasting impact of the Pai chairmanship.