Archives For Broadband

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]

I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a telecom policy event in Washington, D.C., from not long ago. The story featured a couple of well-known participants in federal telecom policy as they were talking about how to close the rural digital divide. The punchline of the story was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.

And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.

Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more of his tenure as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on this trip; I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.

Rural Digital Divide Policy on the Ground

Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:

For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]

Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.

It’s not hard to see why. Looking at North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models—the cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are relatively few economies of scale that would allow carriers to amortize these costs across multiple households. Add to this that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.

On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.

Pai’s Progress Closing the Rural Digital Divide

This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.

On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Chairman Pai’s résumé. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction-based approach to supporting service deployment.

This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed-wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service, or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.

The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.

Then there are the chairman’s accomplishments on the spectrum and wireless internet fronts. Here, he faced resistance from both within the government and industry. In some of the more absurd episodes of government in-fighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this is going to prove to be a good or bad bet in the long term—whether fixed wireless is going to be able to offer the quality and speed of service its proponents promise, or whether it instead will be a short-run misallocation of capital that will require clawbacks and re-awards of funding in another few years—but the embrace of the technology demonstrated decisive leadership and broke through a too-limited, ossified understanding of what technologies could be used to offer service. Again, as said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.

There is more that the commission did under Chairman Pai’s leadership, beyond its obvious orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. Similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee. It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.

But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Chairman Pai said on his first day in the job that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures stakeholders regularly engage with an appointed representative of the FCC, and putting in the time and miles to linger with a farmer talking about the upcoming harvest season: these things make that priority real.

Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that come after—that will serve us well for years to come.

[TOTM: Brent Skorup is a senior research fellow at the Mercatus Center at George Mason University.]

Ajit Pai came into the Federal Communications Commission chairmanship with a single priority: to improve the coverage, cost, and competitiveness of U.S. broadband for the benefit of consumers. The 5G FAST Plan, the formation of the Broadband Deployment Advisory Committee, the large spectrum auctions, and other broadband infrastructure initiatives over the past four years have resulted in accelerated buildouts and higher-quality services. Millions more Americans have gotten connected because of agency action and industry investment.

That brings us to Chairman Pai’s most important action: restoring the deregulatory stance of the FCC toward broadband services by repealing the Title II “net neutrality” rules in 2018. Had he not done this, his and future FCCs would have been bogged down in inscrutable, never-ending net-neutrality debates, reminiscent of the Fairness Doctrine disputes that consumed the agency 50 years ago. The repeal cleared the decks for the pro-deployment policies that followed and redirected the agency away from its roots in mass-media policy, toward a future where its primary responsibilities are encouraging broadband deployment and adoption.

It took tremendous courage from Chairman Pai and Commissioners Michael O’Rielly and Brendan Carr to vote to repeal the 2015 Title II regulations, though they probably weren’t prepared for the public reaction to a seemingly arcane dispute over regulatory classification. The hysteria ginned up by net-neutrality advocates, members of Congress, celebrities, and too-credulous journalists was unlike anything I’ve seen in political advocacy. Advocates, of course, don’t intend to provoke disturbed individuals, but the irresponsible predictions of “the end of the internet as we know it” and widespread internet service provider (ISP) content blocking drove one man to call in a bomb threat to the FCC, clearing the building in a desperate attempt to delay or derail the FCC’s Title II repeal. At least two other men pleaded guilty to federal charges after issuing vicious death threats to Chairman Pai, a New York congressman, and their families in the run-up to the regulation’s repeal. No public official should have to face anything resembling that over a policy dispute.

For all the furor, net-neutrality advocates promised a neutral internet that never was and never will be. “Happy little bunny rabbit dreams” is how David Clark of MIT, an early chief protocol architect of the internet, derided the idea of treating all online traffic the same. Relatedly, the no-blocking rule—the sine qua non of net neutrality—was always a legally dubious requirement. Legal scholars had for years called into doubt the constitutionality of imposing must-carry requirements on ISPs. Unsurprisingly, a federal appellate judge pressed this point in the 2016 oral arguments over the net neutrality rules. The Obama FCC’s attorney conceded without a fight: even after the net neutrality order, ISPs were “absolutely” free to curate the internet.

Chairman Pai recognized that the fight wasn’t about website blocking, and it wasn’t, strictly speaking, about net neutrality. This was the latest front in the long battle over whether the FCC should strictly regulate mass-media distribution. There is a long tradition of progressive distrust of new (unregulated) media. The media-access movement that pushed for broadcast TV, radio, and cable regulations from the 1960s to the 1980s never went away; only the terminology has changed: disinformation, net neutrality, hate speech, gatekeeper.

The decline in power of regulated media—broadcast radio and TV—and the rising power of unregulated internet-based media—social media, Netflix, and podcasts—meant that the FCC and Congress had few ways to shape American news and media consumption. In the words of Tim Wu, the law professor who coined the term “net neutrality,” the internet rules are about giving the agency the continuing ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”

Title II was the only tool available to bring this powerful new media—broadband access—under intense regulatory scrutiny by regulators and the political class. As net-neutrality advocate and Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition. Net neutrality would draw the agency into contentious mass-media regulation once again, distracting it from universal service efforts, spectrum access and auctions, and cleaning up the regulatory detritus that had slowly accumulated since the passage of the agency’s guiding statutes: the 1934 Communications Act and the 1996 Telecommunications Act.

There are probably items that Chairman Pai wishes he’d finished or had done slightly differently. He’s left a proud legacy, however, and his politically risky decision to repeal the Title II rules redirected agency energies away from no-win net-neutrality battles and toward broadband deployment and infrastructure. Great progress was made, and one hopes the Biden FCC chairperson will continue the trajectory that Pai set.

[TOTM: Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]

Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls.

Less will be said about the important work he has done rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations make up their legacies that the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted to do: both build the organization and make significant regulatory reforms.

Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.

Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.

Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai, over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard, as Democrats in general were deeply against President Donald Trump and some members of Congress found a divided FCC convenient.

The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.

On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.

Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analyses. To remedy this situation, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, but unfortunately it was with one of the rare party-line votes. Hopefully, Democratic commissioners have learned the value of the OEA.

The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pretend that his or her initiatives are somehow substantive.

Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:

On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of billions of dollars more in taxes than it is spending on the program. [footnotes omitted]

The lack of transparency led many to not trust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had not Chairman Pai changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on. And it ensures that orders do not change after they are voted on.

The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech in order to restore light-handed regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgment for the FCC’s research on the impacts of Ligado’s proposal.

It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.

[TOTM: Harold Feld is senior vice president of Public Knowledge.]

Chairman Ajit Pai prioritized making new spectrum available for 5G. To his credit, he succeeded. Over the course of four years, Chairman Pai made available more high-band and mid-band spectrum, for licensed use and unlicensed use, than any other Federal Communications Commission chairman. He did so in the face of unprecedented opposition from other federal agencies, navigating the chaotic currents of the Trump administration with political acumen and courage. The Pai FCC will go down in history as the 5G FCC, and as the chairman who protected the primacy of FCC control over commercial spectrum policy.

At the same time, the Pai FCC will also go down in history as the most conventional FCC on spectrum policy in the modern era. Chairman Pai undertook no sweeping review of spectrum policy in the manner of former Chairman Michael Powell, and introduced no radically different spectrum technologies comparable to unlicensed spectrum and spread spectrum in the 1980s, or auctions in the 1990s. To the contrary, Chairman Pai actually rolled back the experimental short-term license structure adopted in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and replaced it with a conventional long-term license carrying an expectation of renewal. He missed a once-in-a-lifetime opportunity to dramatically expand the availability of unlicensed use of the TV white spaces (TVWS) via repacking after the television incentive auction. And in reworking the rules for the 2.5 GHz band, although Pai laudably embraced the recommendation to create an application window for rural tribal lands, he rejected the proposal to allow nonprofits a chance to use the band for broadband in favor of conventional auction policy.

Ajit Pai’s Spectrum Policy Gave the US a Strong Position for 5G and Wi-Fi 6

To fully appreciate Chairman Pai’s accomplishments, we must first fully appreciate the urgency of opening new spectrum, and the challenges Pai faced from within the Trump administration itself. While providers can (and should) repurpose spectrum from older technologies to newer technologies, successful widespread deployment can only take place when sufficient amounts of new spectrum become available. This “green field” spectrum allows providers to build out new technologies with the most up-to-date equipment without disrupting existing subscriber services. The protocols developed for mobile 5G services work best with “mid-band” spectrum (generally considered to be frequencies between 2 GHz and 6 GHz). At the time Pai became chairman, the FCC did not have any mid-band spectrum identified for auction.

In addition, spectrum available for unlicensed use has become increasingly congested as more and more services depend on Wi-Fi and other unlicensed applications. Indeed, we have become so dependent on Wi-Fi for home broadband and networking that people routinely talk about buying “Wi-Fi” from commercial broadband providers rather than buying “internet access.” The United States further suffered a serious disadvantage moving forward to next generation Wi-Fi, Wi-Fi 6, because the U.S. lacked a contiguous block of spectrum large enough to take advantage of Wi-Fi 6’s gigabit capabilities. Without gigabit Wi-Fi, Americans will increasingly be unable to use the applications that gigabit broadband to the home makes possible.

But virtually all spectrum—particularly mid-band spectrum—has significant incumbents. These incumbents include federal users, particularly the U.S. Department of Defense. Finding new spectrum optimal for 5G required reclaiming spectrum from these incumbents. Unlicensed services do not require relocating incumbent users, but creating such “underlay” unlicensed spectrum access requires rules to prevent unlicensed operations from causing harmful interference to licensed services. Needless to say, incumbent services fiercely resist any change in spectrum-allocation rules, claiming that reducing their spectrum allocation or permitting underlay unlicensed services will cause harmful interference and compromise valuable existing services.

The need to reallocate unprecedented amounts of spectrum to ensure successful 5G and Wi-Fi 6 deployment in the United States created an unholy alliance of powerful incumbents, commercial and federal, dedicated to blocking FCC action. Federal agencies—in violation of established federal spectrum policy—publicly challenged the FCC’s spectrum-allocation decisions. Powerful industry incumbents—such as the auto industry, the power industry, and defense contractors—aggressively lobbied Congress to reverse the FCC’s spectrum actions by legislation. The National Telecommunications and Information Administration (NTIA), the federal agency tasked with formulating federal spectrum policy, was missing in action as it rotated among different acting agency heads. As the chair and ranking member of the House Commerce Committee noted, this unprecedented and very public opposition by federal agencies to FCC spectrum policy threatened U.S. wireless interests both domestically and internationally.

Navigating this hostile terrain required Pai to exercise both political acumen and political will. Pai accomplished his goals of reallocating 600 MHz of spectrum for auction, opening over 1200 MHz of contiguous spectrum for unlicensed use, and authorizing the new entrant Ligado Networks over the objections of the DOD. He did so by a combination of persuading President Donald Trump of the importance of maintaining U.S. leadership in 5G and insisting on impeccable analysis by the FCC’s engineers to support the reallocation and underlay decisions. On the most significant votes, Pai secured support (or partial support) from the Democrats. Perhaps most importantly, Pai successfully defended the institutional role of the FCC as the ultimate decisionmaker on commercial spectrum use, not subject to a “heckler’s veto” by other federal agencies.

Missed Innovation, ‘Command and Control Lite’

While acknowledging Pai’s accomplishments, a fair consideration of his legacy must also consider his shortcomings. As chairman, Pai proved the most conservative FCC chair on spectrum policy since the 1980s. The Reagan FCC produced the unlicensed and spread-spectrum rules. The Clinton FCC created the spectrum-auction regime. The Bush FCC convened a spectrum task force and produced the concept of database management for unlicensed services, creating the TVWS and laying the groundwork for CBRS in the 3.5 GHz band. The Obama FCC recommended and created the world’s first incentive auction.

The Trump FCC did more than lack comparable accomplishments; it actively rolled back previous innovations. Within the first year of his chairmanship, Pai began a rulemaking designed to roll back the innovative priority access licenses (PALs). Under the rules adopted by the previous chairman, PALs provided exclusive use on a census-block basis for three years with no expectation of renewal. Pai delayed the rollout of CBRS for two years to replace this approach with a standard license structure of 10 years with an expectation of renewal, explicitly to facilitate traditional carrier investment in traditional networks. Pai followed the same path when restructuring the 2.5 GHz band. While laudably creating a window for Native Americans to apply for 2.5 GHz licenses on rural tribal lands, Pai rejected proposals from nonprofits to adopt a window for non-commercial providers to offer broadband. Instead, he simply eliminated the educational requirement and adopted a standard auction for distribution of the remaining licenses.

Similarly, in the unlicensed space, Pai consistently declined to promote innovation. In the repacking following the broadcast incentive auction, Pai rejected the proposal to structure the repacking so as to ensure usable TVWS in every market. Instead, under Pai, the FCC managed the repacking so as to minimize the burden on incumbent primary and secondary licensees. As a result, major markets such as Los Angeles have zero channels available for unlicensed TVWS operation. This effectively relegates the service to a niche rural offering, augmenting existing rural wireless ISPs.

The result is a modified form of “command and control,” the now-discredited system where the FCC would allocate licenses to provide specific services such as “FM radio” or “mobile pager service.” While preserving license flexibility in name, the licensing rules are explicitly structured to promote certain types of investment and business cases. The result is to encourage the same types of licensees to offer improved and more powerful versions of the same types of services, while discouraging more radical innovations.

Conclusion

Chairman Pai can rightly take pride in his overall 5G legacy. He preserved the institutional role of the FCC as the agency responsible for expanding our nation’s access to wireless services against sustained attack by federal agencies determined to protect their own spectrum interests. He provided enough green-field spectrum for both licensed and unlicensed services to permit the successful deployment of 5G and Wi-Fi 6. At the same time, however, he failed to encourage the more radical spectrum policies that made the United States the birthplace of such technologies as mobile broadband and Wi-Fi. We have won the “race” to next-generation wireless, but the players and services are likely to stay the same.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Randy May is president of the Free State Foundation.]

I am pleased to participate in this retrospective symposium regarding Ajit Pai’s tenure as Federal Communications Commission chairman. I have been closely involved in communications law and policy for nearly 45 years, and, as I’ve said several times since Chairman Pai announced his departure, he will leave as one of the most consequential leaders in the agency’s history. And, I should hasten to add, consequential in a positive way, because it’s possible to be consequential in a not-so-positive way.

Chairman Pai’s leadership has been impactful in many different areas—for example, spectrum availability, media deregulation, and institutional reform, to name three—but in this tribute I will focus on his efforts regarding “net neutrality.” I use the quotes because the term has been used by many to mean many different things in many different contexts.

Within a year of becoming chairman, and with the support of fellow Republican commissioners Michael O’Rielly and Brendan Carr, Ajit Pai led the agency in reversing the public utility-like “net neutrality” regulation that had been imposed by the Obama FCC in February 2015 in what became known as the Title II Order. The Title II Order had classified internet service providers (ISPs) as “telecommunications carriers” subject to the same common-carrier regulatory regime imposed on monopolistic Ma Bell during most of the 20th century. While “forbearing” from imposing the full array of traditional common-carrier regulatory mandates, the Title II Order also subjected ISPs to sanctions if they violated an amorphous “general conduct standard,” which provided that ISPs could not “unreasonably” interfere with or disadvantage end users or edge providers like Google, Facebook, and the like.

The aptly styled Restoring Internet Freedom Order (RIF Order), adopted in December 2017, reversed nearly all of the Title II Order’s heavy-handed regulation of ISPs in favor of a light-touch regulatory regime. It was aptly named, because the RIF Order “restored” market “freedom” to internet access regulation that had mostly prevailed since the turn of the 21st century. It’s worth remembering that, in 1999, in opting not to require that newly emerging cable broadband providers be subjected to a public utility-style regime, Clinton-appointee FCC Chairman William Kennard declared: “[T]he alternative is to go to the telephone world…and just pick up this whole morass of regulation and dump it wholesale on the cable pipe. That is not good for America.” And worth recalling, too, that in 2002, the commission, under the leadership of Chairman Michael Powell, determined that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.”

It was this reliance on market freedom that was “restored” under Ajit Pai’s leadership. In an appearance at a Free State Foundation event in December 2016, barely a month before becoming chairman, then-Commissioner Pai declared: “It is time to fire up the weed whacker and remove those rules that are holding back investment, innovation, and job creation.” And he added: “Proof of market failure should guide the next commission’s consideration of new regulations.” True to his word, Pai used the weed whacker to cut down the public utility regime imposed on ISPs by his predecessor. And the lack of proof of any demonstrable market failure was at the core of the RIF Order’s reasoning.

It is true that, as a matter of law, the D.C. Circuit’s affirmance of the Restoring Internet Freedom Order in Mozilla v. FCC rested heavily on the application by the court of Chevron deference, just as it is true that Chevron deference played a central role in the affirmance of the Title II Order and the Brand X decision before that. And it would be disingenuous to suggest that, if a newly reconstituted Biden FCC reinstitutes a public utility-like regulatory regime for ISPs, Chevron deference won’t once again play a central role in the appeal.

But optimist that I am, and focusing not on what possibly may be done as a matter of law, but on what ought to be done as a matter of policy, the “new” FCC should leave in place the RIF Order’s light-touch regulatory regime. In affirming most of the RIF Order in Mozilla, the D.C. Circuit agreed there was substantial evidence supporting the commission’s predictive judgment that reclassification of ISPs “away from public-utility style regulation” was “likely to increase ISP investment and output.” And the court agreed there was substantial evidence to support the commission’s position that such regulation is especially inapt for “a dynamic industry built on technological development and disruption.”

Indeed, the evidence has only become more substantial since the RIF Order’s adoption. Here are only a few factual snippets: According to CTIA, wireless-industry investment for 2019 grew to $29.1 billion, up from $27.4 billion in 2018 and $25.6 billion in 2017. USTelecom estimates that wireline broadband ISPs invested approximately $80 billion in network infrastructure in 2018, up more than $3.1 billion from $76.9 billion in 2017. And total investment most likely increased in 2019 for wireline ISPs like it did for wireless ISPs. Figures cited in the FCC’s 2020 Broadband Deployment Report indicate that fiber broadband networks reached an additional 6.5 million homes in 2019, a 16% increase over the prior year and the largest single-year increase ever.

Additionally, more Americans have access to broadband internet access services, and at ever higher speeds. According to an April 2020 report by USTelecom, for example, gigabit internet service is available to at least 85% of U.S. homes, compared to only 6% of U.S. homes three-and-a-half years ago. In an October 2020 blog post, Chairman Pai observed that “average download speeds for fixed broadband in the United States have doubled, increasing by over 99%” since the RIF Order was adopted. Ookla Speedtests similarly show significant gains in mobile wireless speeds, climbing to 47/10 Mbps in September 2020 compared to 27/8 Mbps in the first half of 2018.

More evidentiary support could be offered regarding the positive results that followed adoption of the RIF Order, and I assume in the coming year it will be. But the import of abandonment of public utility-like regulation of ISPs should be clear.

There is certainly much that Ajit Pai, the first-generation son of immigrants who came to America seeking opportunity in the freedom it offered, accomplished during his tenure. To my way of thinking, “Restoring Internet Freedom” ranks at—or at least near—the top of the list.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans are actually chosen by consumers or what the actual prices paid by those consumers were. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers: if a carrier doesn’t offer a plan matching one of its baskets, Rewheel “assigns” that carrier the HIGHEST monthly price anywhere in the world.

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel’s “assignment” of highest monthly prices to many plans, it’s irrelevant whether their analysis is based on a country’s median price or lowest price. The median is skewed upward, and the lowest actual price may be missing from the dataset entirely.
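A tiny invented example (these numbers are hypothetical, not Rewheel’s) shows how the assignment rule drags a country’s median away from the prices consumers actually face:

```python
# Hypothetical illustration of how "assigning" the world's highest
# price to carriers that don't offer a basket skews a country's median.
from statistics import median

offered = [20, 25, 30]        # three carriers actually sell the basket
WORLD_MAX = 200               # priciest matching plan anywhere on Earth

# Two carriers don't offer the basket, so each is "assigned" WORLD_MAX:
with_assignments = offered + [WORLD_MAX, WORLD_MAX]

print(median(offered))            # 25 -- median of real prices
print(median(with_assignments))   # 30 -- median after Rewheel's rule
```

The country looks more expensive than any price a consumer there actually pays, and the distortion grows with each carrier that happens not to sell the exact basket Rewheel constructed.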

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three carriers. But even if we accept Rewheel’s price data as reliable—which they aren’t—their own data show no relationship between the number of carriers and average price.

Notice the huge overlap of observations among markets with three and four carriers. 

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows there is no statistically significant difference in the intercept or the slopes for markets with three, four or five carriers (the default is three carriers in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
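The regression described above can be sketched as follows. This is a minimal illustration on invented data (the real analysis used Rewheel’s published usage and weighted-average-price figures), with dummies and usage interactions for 4- and 5-carrier markets and 3-carrier markets as the omitted baseline:

```python
# Sketch of a carrier-count regression on synthetic data: price is
# regressed on data usage plus dummies/interactions for 4- and
# 5-carrier markets (3-carrier markets are the omitted baseline).
import numpy as np

rng = np.random.default_rng(0)
n = 90
usage = rng.uniform(1, 30, n)            # average GB used per month
carriers = rng.choice([3, 4, 5], n)      # carriers in each market
# By construction, price depends on usage but NOT on carrier count:
price = 10 + 0.5 * usage + rng.normal(0, 2, n)

d4 = (carriers == 4).astype(float)
d5 = (carriers == 5).astype(float)
X = np.column_stack([np.ones(n), usage, d4, d5, usage * d4, usage * d5])

# Ordinary least squares, with conventional standard errors and t-stats.
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

names = ["intercept", "usage", "4-carrier", "5-carrier",
         "usage x 4-carrier", "usage x 5-carrier"]
for name, b, t in zip(names, beta, beta / se):
    print(f"{name:18s} coef = {b:7.3f}   t = {t:6.2f}")
```

When the dummy and interaction terms have t-statistics small in magnitude (conventionally below about 2), neither the intercepts nor the slopes differ significantly across 3-, 4-, and 5-carrier markets—which is the pattern in the data Rewheel provides to the public.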

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: From “direct consumer” government-run systems, to “open access” where government builds the back-end, but leaves it up to private firms to bring the connections to consumers, to “middle mile” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move. 

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs. 

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection consumers demand depends on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the relative importance of upload speeds versus download speeds may be highly asymmetrical when determined by consumers.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or upload speeds of 100 Mbps versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may be able to have losses over some period of time, they ultimately must turn a profit at some point, or there will be exit from the marketplace. Profit-and-loss both serve important functions.

Sallet misses the point when he states the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes. 

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient to the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

As Thomas Sowell has noted many times, political debates often involve words that, taken literally, mean something very different from the connotations they convey. Examples abound in the debate about broadband buildout.

There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain. 

“Access” and “high-speed broadband”

For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” is much different from what the average American would expect.

A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice that presents itself.

But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem which calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure to rural areas.

“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.

However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The disconnect between consumer demand as measured in the marketplace in light of real trade-offs between cost and performance and this arbitrary number is not well-explained in this study. The assumption is simply that faster is better, and that the building of faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.

But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps. 

“Competition” and “Overbuilding”

Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.

The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at lowest cost. Specific markets are often subject to competition not only from the firms which exist within those markets, but also from potential competitors who may enter the market any time potential profits reach a point high enough to justify the costs of entry. An important inference from this is that a temporary monopoly, in the sense that one firm has a significant share of the market, is not in itself illegal under antitrust law, even if the firm charges monopoly prices. Potential entry is as real in its effects as actual competitors in forcing incumbents to continue to innovate and provide value to consumers.

However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks. 

In fact, Sallet argues that fears of “overbuilding” are really just fears of competition by incumbent broadband ISPs:

Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more. 

Sallet makes two rhetorical moves here to make his argument. 

The first is redefining “overbuilding,” which refers to literally building a new network on top of (that is, “over”) previously built architecture, as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.

The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.  

But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, there are sometimes high fixed costs which limit the number of firms which will likely exist in a competitive market. In some markets, known as natural monopolies, high infrastructural costs and other barriers to entry relative to the size of the market lead to a situation where it is cheaper for a monopoly to provide a good or service than multiple firms in a market. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is a significant risk in government subsidizing entry. 

Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.” 

As noted by economist George Ford in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry which often causes incumbents to act competitively even in the absence of competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to that which could harm it. If the market only profitably sustains one or two competitors, adding another through municipal broadband or subsidizing a new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive because the subsidized entrant is shielded from the market test of profit-and-loss.

The “Donut Hole” Problem

The term “donut hole” is a final example to consider of how words can be used to confuse rather than enlighten in this debate.

There is broad agreement that generating the positive externalities of universal service requires subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches. 

For instance, some critics of the current subsidy approach have identified a phenomenon in which the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but Internet service in between is relatively paltry due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while “underserved” outlying areas immediately surrounding town centers receive nothing under current policy.

Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served. 

Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole in the funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally provide only municipal service; that is, they serve only the lower-cost core. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base.

This version of the “donut hole,” in which municipal competition erodes the city-center revenues that private firms rely on to support the costs of serving outlying areas, has two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. Second, it increases the average cost of providing service across a firm’s network (because the firm is no longer recovering as much of its costs from the lower-cost city core), which increases the prices that must be charged to rural users to justify offering service at all.
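The arithmetic behind this squeeze can be sketched in a few lines. Every number below is hypothetical, chosen only to show the direction of the effect, not its real-world magnitude:

```python
# Hypothetical sketch: how municipal entry in the low-cost city core
# raises the break-even price a private carrier must charge its
# remaining rural customers. Numbers are illustrative, not real data.

def breakeven_rural_price(town_subs, town_price, town_cost,
                          rural_subs, rural_cost):
    """Rural price at which the carrier's total revenue just covers
    total cost, given the cross-subsidy from its in-town margin."""
    town_margin = town_subs * (town_price - town_cost)
    rural_total_cost = rural_subs * rural_cost
    return (rural_total_cost - town_margin) / rural_subs

# Before entry: 1,000 town customers paying $60 against a $40 cost,
# cross-subsidizing 200 rural customers who cost $120 each to serve.
before = breakeven_rural_price(1000, 60.0, 40.0, 200, 120.0)

# After entry: a municipal network takes 400 town customers, and
# price pressure pushes the remaining town price down to $50.
after = breakeven_rural_price(600, 50.0, 40.0, 200, 120.0)
```

In this toy example the break-even rural price jumps from $20 to $90 per month; the magnitudes are invented, but the mechanism is the double whammy described above.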

Conclusion

Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.

Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second in the form of complaints about ISPs re-imposing UBB following an end to the voluntary temporary halting of the practice during the first months of the COVID-19 pandemic — a move that was an expansion by ISPs of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed.  Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that offer pricing that differs by customers’ usage of the services. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
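A toy model with invented numbers makes the cross-subsidy concrete (the 95/5 split loosely echoes the Comcast figure above, but every dollar amount is hypothetical):

```python
# Compare what light and heavy users pay under a mandated flat rate
# vs. a simple usage-based structure. All figures are hypothetical.

def flat_rate_price(usages, cost_per_tb, fixed_cost):
    """Every subscriber pays an equal share of total network cost."""
    total_cost = fixed_cost + cost_per_tb * sum(usages)
    return total_cost / len(usages)

def usage_based_price(tb, cost_per_tb, fixed_cost, n_users):
    """Each subscriber covers an equal slice of fixed cost plus the
    variable cost of their own usage."""
    return fixed_cost / n_users + cost_per_tb * tb

# 95 light users (0.5 TB/month) and 5 heavy users (10 TB/month)
usages = [0.5] * 95 + [10.0] * 5
flat = flat_rate_price(usages, cost_per_tb=5.0, fixed_cost=3000.0)
light = usage_based_price(0.5, 5.0, 3000.0, len(usages))
heavy = usage_based_price(10.0, 5.0, 3000.0, len(usages))
# light < flat < heavy: under the flat mandate, light users pay more
# than their usage-based price, i.e., they subsidize the heavy users.
```

Here the flat rate works out to about $34.88, versus $32.50 for a light user and $80 for a heavy user under usage-based pricing: the mandate shifts roughly the heavy users’ variable costs onto everyone else.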

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the sort of baseline policy misapprehension that it is needed to manage the network, but that fallacy is dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” Thus, this is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
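Coase’s two-part tariff is simple enough to verify directly: price each unit at marginal cost, and recover the shared fixed cost through a flat access fee. The numbers here are, again, purely hypothetical:

```python
# A minimal two-part tariff: bill = access fee + marginal cost * usage.
# The access fee apportions the shared fixed cost; the per-unit charge
# equals marginal cost, so the price of an extra unit is undistorted.

FIXED_COST = 1200.0        # shared infrastructure cost
MARGINAL_COST = 2.0        # cost of one additional unit
USAGE = [10, 20, 30, 40]   # units consumed by four customers

access_fee = FIXED_COST / len(USAGE)

def bill(units):
    return access_fee + MARGINAL_COST * units

revenue = sum(bill(q) for q in USAGE)
total_cost = FIXED_COST + MARGINAL_COST * sum(USAGE)
# revenue == total_cost: costs are fully recovered even though each
# marginal unit is priced exactly at its marginal cost.
```

The design choice is the point: because the variable charge equals marginal cost, customers face the true cost of additional use, while the fixed fee handles cost recovery without distorting consumption decisions.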

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place —  from subscribing by saddling them with higher prices than they would face with capacity pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (using 1TB/month or more) jumped 138 percent, while the number of “extreme power users” (2TB/month or more) jumped 215 percent. In total, power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather it is in proscribing the use of a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better-off the majority of US broadband customers will be.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer, (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]

Across the globe, millions of people are rapidly coming to terms with the harsh realities of life under lockdown. As governments impose ever-greater social distancing measures, many of the daily comforts we took for granted are no longer available to us. 

And yet, we can all take solace in the knowledge that our current predicament would have been far less tolerable if the COVID-19 outbreak had hit us twenty years ago. Among others, we have Big Tech firms to thank for this silver lining. 

Contrary to the claims of critics, such as Senator Josh Hawley, Big Tech has produced game-changing innovations that dramatically improve our ability to fight COVID-19. 

The previous post in this series showed that innovations produced by Big Tech provide us with critical information, allow us to maintain some level of social interactions (despite living under lockdown), and have enabled companies, universities and schools to continue functioning (albeit at a severely reduced pace).

But apart from information, social interactions, and online working (and learning), what has Big Tech ever done for us?

One of the most underappreciated ways in which technology (mostly pioneered by Big Tech firms) is helping the world deal with COVID-19 has been a rapid shift towards contactless economic transactions. Not only are consumers turning towards digital goods to fill their spare time, but physical goods (most notably food) are increasingly being exchanged without any direct contact.

These ongoing changes would be impossible without the innovations and infrastructure that have emerged from tech and telecommunications companies over the last couple of decades. 

Of course, the overall picture is still bleak. The shift to contactless transactions has only slightly softened the tremendous blow suffered by the retail and restaurant industries – some predictions suggest their overall revenue could fall by at least 50% in the second quarter of 2020. Nevertheless, as explained below, this situation would likely be significantly worse without the many innovations produced by Big Tech companies. For that, we should be thankful.

1. Food and other goods

For a start, the COVID-19 outbreak (and government measures to combat it) has caused many brick & mortar stores and restaurants to shut down. These closures would have been far harder to implement before the advent of online retail and food delivery platforms.

At the time of writing, e-commerce websites already appear to have witnessed a 20-30% increase in sales (other sources report a 52% increase compared to the same time last year). This increase will likely continue in the coming months.

The Amazon Retail platform has been at the forefront of this online shift.

  • Having witnessed a surge in online shopping, Amazon announced that it would be hiring 100,000 distribution workers to cope with the increased demand. Amazon’s staff have also been asked to work overtime in order to meet increased demand (in exchange, Amazon has doubled their pay for overtime hours).
  • To attract these new hires and ensure that existing ones continue working, Amazon simultaneously announced that it would be increasing wages in virus-hit countries (from $15 to $17 in the US).
  • Amazon also stopped accepting “non-essential” goods in its warehouses, in order to prioritize the sale of household essentials and medical goods that are in high demand.
  • Finally, in Italy, Amazon decided not to stop its operations, despite some employees testing positive for COVID-19. Controversial as this move may be, Amazon’s private interests are aligned with those of society – maintaining the supply of essential goods is now more important than ever. 

And it is not just Amazon that is seeking to fill the breach left temporarily by brick & mortar retail. Other retailers are also stepping up efforts to distribute their goods online.

  • The apps of traditional retail chains have witnessed record daily downloads (thus relying on the smartphone platforms pioneered by Google and Apple).
  • Walmart has become the go-to choice for online food purchases (source: Bloomberg).

The shift to online shopping mimics what occurred in China during its own COVID-19 lockdown. 

  • According to an article published in HBR, e-commerce penetration reached 36.6% of retail sales in China (compared to 29.7% in 2019). The same article explains how Alibaba’s technology is enabling traditional retailers to better manage their supply chains, ultimately helping them to sell their goods online.
  • A Nielsen study found that 67% of retailers plan to expand their online channels. 
  • One large retailer shut many of its physical stores and redeployed many of its employees to serve as online influencers on WeChat, thus attempting to boost online sales.
  • Spurred by compassion and/or a desire to boost their brand abroad, Alibaba and its founder, Jack Ma, have made substantial efforts to provide critical medical supplies (notably test kits and surgical masks) to COVID-hit countries such as the US and Belgium.

And it is not just retail that is adapting to the outbreak. Many restaurants are trying to stay afloat by shifting from in-house dining to deliveries. These attempts have been made possible by the emergence of food delivery platforms, such as UberEats and Deliveroo. 

These platforms have taken several steps to facilitate food deliveries during the outbreak.

  • UberEats announced that it would be waiving delivery fees for independent restaurants.
  • Both UberEats and Deliveroo have put in place systems for deliveries to take place without direct physical contact. While not entirely risk-free, meal delivery can provide welcome relief to people experiencing stressful lockdown conditions.

Similarly, the shares of Blue Apron – an online meal-kit delivery service – have surged more than 600% since the start of the outbreak.

In short, COVID-19 has caused a drastic shift towards contactless retail and food delivery services. It is an open question how much of this shift would have been possible without the pioneering business model innovations brought about by Amazon and its online retail platform, as well as modern food delivery platforms, such as UberEats and Deliveroo. At the very least, it seems unlikely that it would have happened as fast.

The entertainment industry is another area where increasing digitization has made lockdowns more bearable. The reason is obvious: locked-down consumers still require some form of amusement. With physical supply chains under tremendous strain, and social gatherings no longer an option, digital media has thus become the default choice for many.

Data published by Verizon shows a sharp increase (in the week running from March 9 to March 16) in the consumption of digital entertainment, especially gaming.

This echoes other sources, which also report that the use of traditional streaming platforms has surged in areas hit by COVID-19.

  • Netflix subscriptions are said to be spiking in locked-down communities. During the first week of March, Netflix installations increased by 77% in Italy and 33% in Spain, compared to the February average. Netflix app downloads increased by 33% in Hong Kong and South Korea. The Amazon Prime app saw a similar increase.
  • YouTube has also witnessed a surge in usage. 
  • Live streaming (on platforms such as Periscope, Twitch, YouTube, Facebook, Instagram, etc) has also increased in popularity. It is notably being used for everything from concerts and comedy clubs to religious services, and even zoo visits.
  • Disney Plus has also been highly popular. According to one source, half of US homes with children under the age of 10 purchased a Disney Plus subscription. This trend is expected to continue during the COVID-19 outbreak. Disney even released Frozen II three months ahead of schedule in order to boost new subscriptions.
  • Hollywood studios have started releasing some of their lower-profile titles directly on streaming services.

Traffic has also increased significantly on popular gaming platforms.

These are just a few of the many ways in which digital entertainment is filling the void left by social gatherings. It is thus central to the lives of people under lockdown.

2. Cashless payments

But all of the services listed above rely on cashless payments – be it to limit the risk of contagion or because these transactions take place remotely. Fintech innovations have thus turned out to be one of the foundations that make social distancing policies viable. 

This is particularly evident in the food industry. 

  • Food delivery platforms, like UberEats and Deliveroo, already relied on mobile payments.
  • Costa Coffee (a UK equivalent of Starbucks) went cashless in an attempt to limit the spread of COVID-19.
  • Domino’s Pizza, among other franchises, announced that it would move to contactless deliveries.
  • President Donald Trump is said to have discussed plans to keep drive-thru restaurants open during the outbreak. This would almost certainly imply exclusively digital payments.
  • And although doubts remain concerning the extent to which the SARS-CoV-2 virus may, or may not, be transmitted via banknotes and coins, many other businesses have preemptively ceased to accept cash payments.

As Jodie Kelley, CEO of the Electronic Transactions Association, put it in a CNBC interview:

Contactless payments have come up as a new option for consumers who are much more conscious of what they touch. 

This increased demand for cashless payments has been a blessing for Fintech firms. 

  • Though it is too early to gauge the magnitude of this shift, early signs – notably from China – suggest that mobile payments have become more common during the outbreak.
  • In China, Alipay announced that it expected to radically expand its services to new sectors – restaurants, cinema bookings, real estate purchases – in an attempt to compete with WeChat.
  • PayPal has also witnessed an uptick in transactions, though this growth might ultimately be weighed down by declining economic activity.
  • In the past, Facebook had revealed plans to offer mobile payments across its platforms – Facebook, WhatsApp, Instagram & Libra. Those plans may not have been politically viable at the time. The COVID-19 outbreak could conceivably change this.

In short, the COVID-19 outbreak has increased our reliance on digital payments, as these can both take place remotely and, potentially, limit contamination via banknotes. None of this would have been possible twenty years ago when industry pioneers, such as PayPal, were in their infancy. 

3. High speed internet access

Similarly, it goes without saying that none of the above would be possible without the tremendous investments that have been made in broadband infrastructure, most notably by internet service providers. Though these companies have often faced strong criticism from the public, they provide the backbone upon which outbreak-stricken economies can function.

By causing so many activities to move online, the COVID-19 outbreak has put broadband networks to the test. So far, broadband infrastructure around the world has been up to the task. This is partly because the spike in usage has occurred during daytime hours (when network capacity is less strained), but also because ISPs traditionally rely on a number of tools to limit peak-time usage.

The biggest increases in usage seem to have occurred in daytime hours, as data from OpenVault illustrates.

According to BT, one of the UK’s largest telecoms operators, daytime internet usage is up by 50%, but peaks are still well within record levels (and other UK operators have made similar claims).

Anecdotal data also suggests that, so far, fixed internet providers have not significantly struggled to handle this increased traffic (the same goes for Content Delivery Networks). Not only were these networks already designed to withstand high peaks in demand, but ISPs, such as Verizon, have increased their capacity to avoid potential issues.

For instance, internet speed tests performed using Ookla suggest that average download speeds only marginally decreased, if at all, in locked-down regions, compared to previous levels.

However, the same data suggests that mobile networks have faced slightly larger decreases in performance, though these do not appear to be severe. For instance, contrary to contemporaneous reports, a mobile network outage that occurred in the UK is unlikely to have been caused by a COVID-related surge. 

The robustness exhibited by broadband networks is notably due to long-running efforts by ISPs (spurred by competition) to improve download speeds and latency. As one article put it:

For now, cable operators’ and telco providers’ networks are seemingly withstanding the increased demands, which is largely due to the upgrades that they’ve done over the past 10 or so years using technologies such as DOCSIS 3.1 or PON.

Pushed in part by Google Fiber’s launch back in 2012, the large cable operators and telcos, such as AT&T, Verizon, Comcast and Charter Communications, have spent years upgrading their networks to 1-Gig speeds. Prior to those upgrades, cable operators in particular struggled with faster upload speeds, and the slowdown of broadband services during peak usage times, such as after school and in the evenings, as neighborhood nodes became overwhelmed.

This is not without policy ramifications.

For a start, these developments might vindicate antitrust enforcers who allowed mergers that led to higher investment, sometimes at the expense of slight reductions in price competition. This is notably the case for so-called 4-to-3 mergers in the wireless telecommunications industry. As an in-depth literature review by ICLE scholars concludes:

Studies of investment also found that markets with three facilities-based operators had significantly higher levels of investment by individual firms.

Similarly, the COVID-19 outbreak has also cast further doubt over the appropriateness of net neutrality regulations. Indeed, an important criticism of such regulations is that they prevent ISPs from using the price mechanism to manage congestion.

It is these fears of congestion, likely unfounded (see above), that led the European Union to urge streaming companies to voluntarily reduce the quality of their products. To date, Netflix, YouTube, Amazon Prime, Apple, Facebook and Disney have complied with the EU’s request. 

This may seem like a trivial problem, but it was totally avoidable. As a result of net neutrality regulation, European authorities and content providers have been forced into an awkward position over congestion fears that are likely unfounded, one that unnecessarily penalizes those consumers and ISPs who do not face congestion issues (conversely, it lets failing ISPs off the hook and disincentivizes further investments on their part). This is all the more unfortunate given that, as argued above, streaming services are essential to locked-down consumers. 

Critics may retort that small quality decreases hardly have any impact on consumers. But, if this is indeed the case, then content providers were using up unnecessary amounts of bandwidth before the COVID-19 outbreak (something that is less likely to occur without net neutrality obligations). And if not, then European consumers have indeed been deprived of something they valued. The shoe is thus on the other foot.

These normative considerations aside, the big point is that we can all be thankful to live in an era of high-speed internet.

 4. Concluding remarks 

Big Tech is rapidly emerging as one of the heroes of the COVID-19 crisis. Companies that were once on the receiving end of daily reproaches – from the press, enforcers, and scholars alike – are gaining renewed appreciation from the public. Times have changed since the early days of these companies, when consumers marvelled at the endless possibilities that their technologies offered. Today we are coming to realize how essential tech companies have become to our daily lives, and how they make society more resilient in the face of fat-tailed events like pandemics.

The move to a contactless, digital economy is a critical part of what makes contemporary societies better equipped to deal with COVID-19. As this post has argued, online delivery, digital entertainment, contactless payments and high-speed internet all play a critical role. 

To think that we receive some of these services for free…

Last year, Erik Brynjolfsson, Avinash Collis and Felix Eggers published a paper in PNAS showing that consumers were willing to pay significant sums for online goods they currently receive free of charge. One can only imagine how much larger those sums would be if that same experiment were repeated today.

Even Big Tech’s critics are willing to recognize the huge debt we owe to these companies. As Steven Levy wrote, in an article titled “Has the Coronavirus Killed the Techlash?”:

Who knew the techlash was susceptible to a virus?

The pandemic does not make any of the complaints about the tech giants less valid. They are still drivers of surveillance capitalism who duck their fair share of taxes and abuse their power in the marketplace. We in the press must still cover them aggressively and skeptically. And we still need a reckoning that protects the privacy of citizens, levels the competitive playing field, and holds these giants to account. But the momentum for that reckoning doesn’t seem sustainable at a moment when, to prop up our diminished lives, we are desperately dependent on what they’ve built. And glad that they built it.

While it is still early to draw policy lessons from the outbreak, one thing seems clear: the COVID-19 pandemic provides yet further evidence that tech policymakers should be extremely careful not to kill the goose that laid the golden egg, by promoting regulations that may thwart innovation (or the opposite).

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz, (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking and thus making more intensive use of our Internet connections during the day; many have children at home who are using the Internet for both education and entertainment; and we are going out less in the evening, so we are making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with Wood and Ryssdal’s discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps—its approach to them follows that of the OIO, requiring that ISPs are free to adopt but must disclose data cap policies. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced small ISPs to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances. 

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower-priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher-capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network.)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.
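The price-discrimination logic described above can be made concrete with a toy calculation. All numbers below are purely hypothetical, chosen only to illustrate the mechanism; they are not actual ISP costs or prices:

```python
# Illustrative sketch of the price-discrimination argument for data caps.
# All figures are hypothetical, not real ISP economics.

FIXED_COST = 40.0    # assumed monthly network cost recovered per subscriber
COST_PER_GB = 0.05   # assumed marginal cost of carrying one gigabyte

light_usage_gb = 100   # basic tasks: e-mail, light web browsing
heavy_usage_gb = 1000  # streaming-heavy "enthusiast" household

def usage_based_price(usage_gb):
    """Each subscriber covers the fixed cost plus the cost of their own traffic."""
    return FIXED_COST + COST_PER_GB * usage_gb

# Flat pricing: one light and one heavy user split the total cost equally.
total_cost = 2 * FIXED_COST + COST_PER_GB * (light_usage_gb + heavy_usage_gb)
flat_price = total_cost / 2

print(f"flat price for everyone: ${flat_price:.2f}")                       # $67.50
print(f"capped (light) tier:     ${usage_based_price(light_usage_gb):.2f}")  # $45.00
print(f"unlimited (heavy) tier:  ${usage_based_price(heavy_usage_gb):.2f}")  # $90.00
```

Under flat pricing, the light user cross-subsidizes the heavy user; with a capped lower tier, the light user pays less, which is precisely the "lower-priced access tier" argument.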

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nation-wide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their meetings. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel). 

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get excessive usage (congestion) during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours. 

Data caps are one way to deal with this issue. Since most users making the most intensive use of the network are all doing so at the same time (at peak hour), this incremental cost either discourages this use or provides the revenue necessary to expand capacity to accommodate their use. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”)). 
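The peak-load pricing problem described above can be sketched in a few lines. The numbers are hypothetical, intended only to show why the efficient price of usage is near zero off-peak and high at peak:

```python
# Hypothetical sketch of peak-load pricing on a network.
# CAPACITY and costs are made-up numbers, not real network parameters.

CAPACITY = 100         # units of traffic the network can carry per hour
EXPANSION_COST = 2.0   # assumed cost per unit of extra capacity needed at peak

def marginal_cost(load):
    """Below capacity, carrying one more unit is nearly free;
    at or above capacity, the network must be expanded to carry it."""
    return 0.01 if load < CAPACITY else EXPANSION_COST

offpeak_load = 40   # daytime: telework, video calls, homework
peak_load = 120     # evening: everyone streaming at once

print(marginal_cost(offpeak_load))  # near zero: extra daytime use is ~costless
print(marginal_cost(peak_load))     # high: extra evening use forces expansion
```

A price that tracks this marginal cost would charge almost nothing off-peak and substantially more at peak, which is the "shave the peak" intuition in the text; a data cap is a blunter instrument that taxes both equally.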

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.
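For readers curious what such prioritization looks like at the application edge, one long-standing mechanism is DiffServ (DSCP) marking: an application tags latency-sensitive packets so that a cooperating network can queue them ahead of bulk traffic. A minimal sketch follows; note that whether any router actually honors the mark depends entirely on network configuration, so this shows only the marking mechanism, not end-to-end prioritization:

```python
# Minimal sketch of DiffServ (DSCP) marking from an application, e.g. for
# VoIP or video-conferencing traffic. Marking a packet does not guarantee
# priority; routers must be configured to honor the mark.
import socket

DSCP_EF = 46            # "Expedited Forwarding": conventional class for voice/video
TOS_EF = DSCP_EF << 2   # DSCP occupies the upper 6 bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Datagrams sent on `sock` now carry the EF mark; a DiffServ-aware network
# could queue them ahead of bulk streaming traffic.
```

The point is that the mechanism itself is old and simple; what the net neutrality debate chilled, as argued above, is investment in deploying and honoring such markings across consumer-facing networks.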

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.