
States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.
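To make those thresholds concrete, here is a minimal sketch (in Python, with hypothetical function and variable names) of how a location would be classified under the NOFO's speed definitions; it deliberately ignores the latency and technology-reliability criteria the NOFO also applies:

```python
def classify_location(download_mbps: float, upload_mbps: float) -> str:
    """Classify a location under the BEAD NOFO speed thresholds.

    Illustrative sketch only: the NOFO also weighs latency and whether the
    underlying technology counts as "Reliable Broadband Service."
    """
    if download_mbps < 25 or upload_mbps < 3:
        return "unserved"      # lacks even 25/3 Mbps service
    if download_mbps < 100 or upload_mbps < 20:
        return "underserved"   # has 25/3 but lacks 100/20 Mbps service
    return "served"

# Example: a 50/5 Mbps location would count as underserved.
print(classify_location(50, 5))  # -> "underserved"
```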

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impede Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.

It is relevant background that broadband speeds have increased consistently in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

Which is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation on broadband.

The Infrastructure Act requires that each recipient of BEAD funding offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other provisions, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GONs) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort, to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission for purposes of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated in the Federal Communications Commission’s since-repealed 2015 Open Internet Order. As such, it would serve to provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include these potentially onerous requirements in the original infrastructure bill, yet NTIA now seeks to impose them without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that would fix these deficiencies.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted-advertising market, which is particularly strange given the spotlight regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in more targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market. ISPs could (and, over the years in various markets, some have) rely on advertising to subsidize Internet access. That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Daniel Lyons is a professor of law at Boston College Law School and a visiting fellow at the American Enterprise Institute.]

For many, the chairmanship of Ajit Pai is notable for its many headline-grabbing substantive achievements, including the Restoring Internet Freedom order, 5G deployment, and rural buildout—many of which have been or will be discussed in this symposium. But that conversation is incomplete without also acknowledging Pai’s careful attention to the basic blocking and tackling of running a telecom agency. The last four years at the Federal Communications Commission were marked by small but significant improvements in how the commission functions, and few are more important than the chairman’s commitment to transparency.

Draft Orders: The Dark Ages Before 2017

This commitment is most notable in Pai’s revisions to the open meeting process. From time immemorial, the FCC chairman would set the agenda for the agency’s monthly meeting by circulating draft orders to the other commissioners three weeks in advance. But the public was deliberately excluded from that distribution list. During this period, the commissioners would read proposals, negotiate revisions behind the scenes, then meet publicly to vote on final agency action. But only after the meeting—often several days later—would the actual text of the order be made public.

The opacity of this process had several adverse consequences. Most obviously, the public lacked details about the substance of the commission’s deliberations. The Government in the Sunshine Act requires the agency’s meetings to be made public so the American people know what their government is doing. But without the text of the orders under consideration, the public had only a superficial understanding of what was happening each month. The process was reminiscent of House Speaker Nancy Pelosi’s famous gaffe that Congress needed to “pass the [Affordable Care Act] bill so that you can find out what’s in it.” During the high-profile deliberations over the Open Internet Order in 2015, then-Commissioner Pai made significant hay over this secrecy, repeatedly posting pictures of himself with the 300-plus-page order on Twitter with captions such as “I wish the public could see what’s inside” and “the public still can’t see it.”

Other consequences were less apparent, but more detrimental. Because the public lacked detail about key initiatives, the telecom media cycle could be manipulated by strategic leaks designed to shape the final vote. As then-Commissioner Pai testified to Congress in 2016:

[T]he public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Sometimes, this secrecy backfired on the chairman, such as when net-neutrality advocates used media pressure to shape the 2014 Open Internet NPRM. Then-Chairman Tom Wheeler’s proposed order sought to follow the roadmap laid out by the D.C. Circuit’s Verizon decision, which relied on Title I to prevent ISPs from blocking content or acting in a “commercially unreasonable manner.” Proponents of a more aggressive Title II approach leaked these details to the media in a negative light, prompting tech journalists and advocates to unleash a wave of criticism alleging the chairman was “killing off net neutrality to…let the big broadband providers double charge.” In full damage control mode, Wheeler attempted to “set the record straight” about “a great deal of misinformation that has recently surfaced regarding” the draft order. But the tempest created by these leaks continued, pressuring Wheeler into adding a Title II option to the NPRM—which, of course, became the basis of the 2015 final rule.

This secrecy also harmed agency bipartisanship, as minority commissioners sometimes felt as much in the dark as the general public. As Wheeler scrambled to address Title II advocates’ concerns, he reportedly shared revised drafts with fellow Democrats but did not circulate the final draft to Republicans until less than 48 hours before the vote—leading Pai to remark cheekily that “when it comes to the Chairman’s latest net neutrality proposal, the Democratic Commissioners are in the fast lane and the Republican Commissioners apparently are being throttled.” Similarly, Pai complained during the 2014 spectrum screen proceeding that “I was not provided a final version of the item until 11:50 p.m. the night before the vote and it was a substantially different document with substantively revised reasoning than the one that was previously circulated.”

Letting the Sunshine In

Eliminating this culture of secrecy was one of Pai’s first decisions as chairman. Less than a month after assuming the reins at the agency, he announced that the FCC would publish all draft items at the same time they are circulated to commissioners, typically three weeks before each monthly meeting. While this move was largely applauded, some were concerned that this transparency would hamper the agency’s operations. One critic suggested that pre-meeting publication would hamper negotiations among commissioners: “Usually, drafts created negotiating room…Now the chairman’s negotiating position looks like a final position, which undercuts negotiating ability.” Another, while supportive of the change, was concerned that the need to put a draft order in final form well before a meeting might add “a month or more to the FCC’s rulemaking adoption process.”

Fortunately, these concerns proved to be unfounded. The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan—compared to 33% and 69.9%, respectively, under Chairman Wheeler. 

This increased transparency also improved the overall quality of the agency’s work product. In a 2018 speech before the Free State Foundation, Commissioner Mike O’Rielly explained that “drafts are now more complete and more polished prior to the public reveal, so edits prior to the meeting are coming from Commissioners, as opposed to there being last minute changes—or rewrites—from staff or the Office of General Counsel.” Publishing draft orders in advance allows the public to flag potential issues for revision before the meeting, which improves the quality of the final draft and reduces the risk of successful post-meeting challenges via motions for reconsideration or petitions for judicial review. O’Rielly went on to note that the agency seemed to be running more efficiently as well, as “[m]eetings are targeted to specific issues, unnecessary discussions of non-existent issues have been eliminated, [and] conversations are more productive.”

Other Reforms

While pre-meeting publication was the most visible improvement to agency transparency, there are other initiatives also worth mentioning.

  • Limiting Editorial Privileges: Chairman Pai dramatically limited “editorial privileges,” a longtime tradition that allowed agency staff to make changes to an order’s text even after the final vote. Under Pai, editorial privileges were limited to technical and conforming edits only; substantive changes were not permitted unless they were proposed directly by a commissioner and only in response to new arguments offered by a dissenting commissioner. This reduces the likelihood of a significant change being introduced outside the public eye.
  • Fact Sheet: Adopting a suggestion of Commissioner Mignon Clyburn, Pai made it a practice to preface each published draft order with a one-page fact sheet that summarized the item in lay terms, as much as possible. This made the agency’s monthly work more accessible and transparent to members of the public who lacked the time to wade through the full text of each draft order.
  • Online Transparency Dashboard: Pai also launched an online dashboard on the agency’s website. This dashboard offers metrics on the number of items currently pending at the commission by category, as well as quarterly trends over time.
  • Restricting Comment on Upcoming Items: As a gesture of respect to fellow commissioners, Pai committed that the chairman’s office would not brief the press or members of the public, or publish a blog, about an upcoming matter before it was shared with other commissioners. This was another step toward reducing the strategic use of leaks or selective access to guide the tech media news cycle.

And while it’s technically not a transparency reform, Pai also deserves credit for his willingness to engage the public as the face of the agency. He was the first FCC commissioner to join Twitter, and throughout his chairmanship he maintained an active social media presence that helped personalize the agency and make it more accessible. His commitment to this channel is all the more impressive when one considers the way some opponents used these platforms to hurl a steady stream of hateful, often violent and racist invective at him during his tenure.

Pai deserves tremendous credit for spearheading these efforts to bring the agency out of the shadows and into the sunlight. Of course, he was not working alone. Pai shares credit with other commissioners and staff who supported transparency and worked to bring these policies to fruition, most notably former Commissioner O’Rielly, who beat a steady drum for process reform throughout his tenure.

We do not yet know who President Joe Biden will appoint as Pai’s successor. It is fair to assume that whoever is chosen will seek to put his or her own stamp on the agency. But let’s hope that enhanced transparency and the other process reforms enacted over the past four years remain a staple of agency practice moving forward. They may not be flashy, but they may prove to be the most significant and long-lasting impact of the Pai chairmanship.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Randy May is president of the Free State Foundation.]

I am pleased to participate in this retrospective symposium regarding Ajit Pai’s tenure as Federal Communications Commission chairman. I have been closely involved in communications law and policy for nearly 45 years, and, as I’ve said several times since Chairman Pai announced his departure, he will leave as one of the most consequential leaders in the agency’s history. And, I should hastily add, consequential in a positive way, because it’s possible to be consequential in a not-so-positive way.

Chairman Pai’s leadership has been impactful in many different areas—for example, spectrum availability, media deregulation, and institutional reform, to name three—but in this tribute I will focus on his efforts regarding “net neutrality.” I use the quotes because the term has been used by many to mean many different things in many different contexts.

Within a year of becoming chairman, and with the support of fellow Republican commissioners Michael O’Rielly and Brendan Carr, Ajit Pai led the agency in reversing the public utility-like “net neutrality” regulation that had been imposed by the Obama FCC in February 2015 in what became known as the Title II Order. The Title II Order had classified internet service providers (ISPs) as “telecommunications carriers” subject to the same common-carrier regulatory regime imposed on monopolistic Ma Bell during most of the 20th century. While “forbearing” from imposing the full array of traditional common-carrier regulatory mandates, the Title II Order also subjected ISPs to sanctions if they violated an amorphous “general conduct standard,” which provided that ISPs could not “unreasonably” interfere with or disadvantage end users or edge providers like Google, Facebook, and the like.

The aptly styled Restoring Internet Freedom Order (RIF Order), adopted in December 2017, reversed nearly all of the Title II Order’s heavy-handed regulation of ISPs in favor of a light-touch regulatory regime. It was aptly named, because the RIF Order “restored” market “freedom” to internet access regulation that had mostly prevailed since the turn of the 21st century. It’s worth remembering that, in 1999, in opting not to require that newly emerging cable broadband providers be subjected to a public utility-style regime, Clinton-appointee FCC Chairman William Kennard declared: “[T]he alternative is to go to the telephone world…and just pick up this whole morass of regulation and dump it wholesale on the cable pipe. That is not good for America.” And worth recalling, too, that in 2002, the commission, under the leadership of Chairman Michael Powell, determined that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.”

It was this reliance on market freedom that was “restored” under Ajit Pai’s leadership. In an appearance at a Free State Foundation event in December 2016, barely a month before becoming chairman, then-Commissioner Pai declared: “It is time to fire up the weed whacker and remove those rules that are holding back investment, innovation, and job creation.” And he added: “Proof of market failure should guide the next commission’s consideration of new regulations.” True to his word, the weed whacker was used to cut down the public utility regime imposed on ISPs by his predecessor. And the lack of proof of any demonstrable market failure was at the core of the RIF Order’s reasoning.

It is true that, as a matter of law, the D.C. Circuit’s affirmance of the Restoring Internet Freedom Order in Mozilla v. FCC rested heavily on the court’s application of Chevron deference, just as it is true that Chevron deference played a central role in the affirmance of the Title II Order and the Brand X decision before that. And it would be disingenuous to suggest that, if a newly reconstituted Biden FCC reinstitutes a public utility-like regulatory regime for ISPs, Chevron deference won’t once again play a central role in the appeal.

But optimist that I am, and focusing not on what possibly may be done as a matter of law, but on what ought to be done as a matter of policy, the “new” FCC should leave in place the RIF Order’s light-touch regulatory regime. In affirming most of the RIF Order in Mozilla, the D.C. Circuit agreed there was substantial evidence supporting the commission’s predictive judgment that reclassification of ISPs “away from public-utility style regulation” was “likely to increase ISP investment and output.” And the court agreed there was substantial evidence to support the commission’s position that such regulation is especially inapt for “a dynamic industry built on technological development and disruption.”

Indeed, the evidence has only become more substantial since the RIF Order’s adoption. Here are only a few factual snippets: According to CTIA, wireless-industry investment for 2019 grew to $29.1 billion, up from $27.4 billion in 2018 and $25.6 billion in 2017. USTelecom estimates that wireline broadband ISPs invested approximately $80 billion in network infrastructure in 2018, up more than $3.1 billion from $76.9 billion in 2017, and total investment most likely increased in 2019 for wireline ISPs as it did for wireless ISPs. Figures cited in the FCC’s 2020 Broadband Deployment Report indicate that fiber broadband networks reached an additional 6.5 million homes in 2019, a 16% increase over the prior year and the largest single-year increase ever.

Additionally, more Americans have access to broadband internet access services, and at ever higher speeds. According to an April 2020 report by USTelecom, for example, gigabit internet service is available to at least 85% of U.S. homes, compared to only 6% of U.S. homes three-and-a-half years ago. In an October 2020 blog post, Chairman Pai observed that “average download speeds for fixed broadband in the United States have doubled, increasing by over 99%” since the RIF Order was adopted. Ookla Speedtests similarly show significant gains in mobile wireless speeds, climbing to 47/10 Mbps in September 2020 compared to 27/8 Mbps in the first half of 2018.

More evidentiary support could be offered regarding the positive results that followed adoption of the RIF Order, and I assume in the coming year it will be. But the import of abandonment of public utility-like regulation of ISPs should be clear.

There is certainly much that Ajit Pai, the first-generation son of immigrants who came to America seeking opportunity in the freedom it offered, accomplished during his tenure. To my way of thinking, “Restoring Internet Freedom” ranks at—or at least near—the top of the list.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second, in the form of complaints about ISPs re-imposing UBB following the end of the voluntary, temporary halt to the practice during the first months of the COVID-19 pandemic — a halt that ISPs adopted as an expansion of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed.  Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that simply vary price with customers’ usage of the service. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its  subscribers use less than 1.2 TB of data monthly).
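As a stylized illustration of that cost-recovery point (the subscriber counts, usage levels, and network cost below are hypothetical, chosen only to show the mechanics), compare how the same network cost is apportioned under flat-rate versus usage-based pricing:

```python
# Stylized comparison of flat-rate vs. usage-based cost recovery.
# All figures are hypothetical; they are not drawn from any ISP's actual data.

subscribers = [("light", 95, 0.3), ("heavy", 5, 3.0)]  # (group, count, TB/month)
network_cost = 5000.0  # monthly network cost to recover, in dollars

total_subs = sum(count for _, count, _ in subscribers)
total_tb = sum(count * usage for _, count, usage in subscribers)

# Flat-rate pricing: every subscriber pays an identical share of the cost.
flat_price = network_cost / total_subs

# Usage-based pricing: each subscriber pays in proportion to the traffic generated.
for group, count, usage in subscribers:
    usage_price = network_cost * usage / total_tb
    print(f"{group}: flat ${flat_price:.2f}/mo vs. usage-based ${usage_price:.2f}/mo")

# Light users pay less under usage-based pricing; heavy users pay more.
# Under a flat-rate mandate, the light majority cross-subsidizes the heavy few.
```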

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the sort of baseline policy misapprehension that it is needed to manage the network, but that fallacy is dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” Thus, this is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
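In its simplest form, the two-part tariff Coase described is just a fixed access charge plus a per-unit usage charge; here is a minimal sketch with hypothetical dollar figures:

```python
def two_part_tariff(usage_tb: float, base_fee: float = 30.0, per_tb: float = 10.0) -> float:
    """Two-part tariff: a fixed access charge plus a charge that scales with usage.
    The base fee and per-TB rate here are hypothetical, for illustration only."""
    return base_fee + per_tb * usage_tb

print(two_part_tariff(0.3))  # light user (0.3 TB/month): $33.00
print(two_part_tariff(3.0))  # heavy user (3.0 TB/month): $60.00
```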

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place —  from subscribing by saddling them with higher prices than they would face with capacity pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity, along with capacity-conserving efficiencies, has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (using 1 TB/month or more) and “extreme power users” (using 2 TB/month or more) jumped 138 percent and 215 percent, respectively. Power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather it is in proscribing the use of a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better-off the majority of US broadband customers will be.

With Matt Starr, Berin Szoka and Geoffrey Manne

Today’s oral argument in the D.C. Circuit over the FCC’s Net Neutrality rules suggests that the case — Verizon v. FCC — is likely to turn on whether the Order impermissibly imposes common carrier regulation on broadband ISPs. If so, the FCC will lose, no matter what the court thinks of the Commission’s sharply contested claims of authority under the Telecommunications Act.

The FCC won last year before the same court when Verizon challenged its order mandating that carriers provide data roaming services to their competitors’ customers. But Judge Tatel, who wrote the Cellco decision, is likely to write the court’s opinion overturning the net neutrality rules — just as he wrote the court’s 2010 Comcast v. FCC opinion, thwarting the FCC’s first attempt at informal net neutrality regulation.

Over an extraordinary two-hour session, Judges Tatel and Silberman asked a barrage of questions that suggest they’ll apply the same test used to uphold the data roaming rule to strike down at least the non-discrimination rule at the heart of the Open Internet Order — and probably, the entire Order.

Common Carrier Analysis

The Communications Act explicitly prohibits treating services that are not regulated under Title II as common carriers. Title II regulates “telecommunications services,” such as landline telephone service, but broadband is an “information service” regulated under Title I of the Act, while wireless is regulated under Title III of the Act (as a “radio transmission”).

In Cellco, the court ruled that the FCC’s data roaming rule did not impermissibly classify mobile providers as common carriers even though it compelled wireless carriers to let other companies’ subscribers roam on their networks. Here, the Open Internet Order effectively forces ISPs to carry traffic of all “edge” providers in an equal, non-discriminatory manner. While these might seem similar, the two mandates differ significantly, and Tatel’s analysis in the data roaming case may lead to precisely the opposite result here.

Tatel’s data roaming opinion rested on a test, derived from decades of case law, for determining what level of regulation constitutes an impermissible imposition of common carrier status:

  1. “If a carrier is forced to offer service indiscriminately and on general terms, then that carrier is being relegated to common carrier status”;

  2. “[T]he Commission has significant latitude to determine the bounds of common carriage in particular cases”;

  3. “[C]ommon carriage is not all or nothing—there is a gray area [between common carrier status and private carrier status] in which although a given regulation might be applied to common carriers, the obligations imposed are not common carriage per se” because they permit carriers to retain sufficient decisionmaking authority over their networks (by retaining programming control and/or the authority to negotiate terms, for example); and

  4. In this gray area, “[the FCC’s] determination that a regulation does or does not confer common carrier status warrants deference” under the Supreme Court’s Chevron decision.

In Cellco, the court determined that the data roaming rule fell into the gray area, and thus deferred to the FCC’s determination that the regulation did not impose common carrier status. The essential distinction, according to the court, was that carriers remained free to “negotiate the terms of their roaming arrangements on an individualized basis,” provided their terms were “commercially reasonable.” Rather than impose a “presumption of reasonableness,” the Commission offered “considerable flexibility for providers to respond to the competitive forces at play in the mobile-data market.” Thus, the court held, the data roaming rule “leaves substantial room for individualized bargaining and discrimination in terms,” and thus “does not amount to a duty to hold out facilities indifferently for public use.”

The Open Internet rules, by contrast, impose a much harsher restriction on what ISPs may do with their broadband networks, barring them from blocking any legal content and prohibiting “unreasonable” discrimination. Judges Tatel and Silberman repeatedly asked questions that suggested that the Order’s reasonable discrimination rule removed the kind of “flexibility” that justified upholding the data roaming rule. By requiring carriers to “offer service indiscriminately and on general terms” and to “hold out facilities indifferently for public use” (to quote the D.C. Circuit’s test), the rule would go beyond the “gray area” in which the FCC gets deference, and fall into the D.C. Circuit’s definition of common carriage. If that’s indeed ultimately where the two judges wind up, it’s game over for the FCC.

The Open Internet Order requires broadband ISPs to make their networks available, and to do so on equal terms that remove pricing flexibility, to any edge provider that wishes to have its content available on an ISP’s network. This seems to be Judge Tatel’s interpretation of ¶ 76 of the Order, which goes on at length about the reasons why “pay for priority” arrangements would “raise significant cause for concern” and then concludes: “In light of each of these concerns, as a general matter, it’s unlikely that pay for priority would satisfy the ‘no unreasonable discrimination’ standard.” So… legal in principle, but effectively banned in practice — a per se rule dressed up as a rule of reason.

If that isn’t, in effect, a requirement that ISPs hold out their networks “indifferently for public use,” it’s hard to imagine what is — as Tatel certainly seemed to think today. Tatel’s use of the term “indiscriminately” in Cellco almost hints that the test was written with the FCC’s “no discrimination” rule in mind.

The FCC tried, but failed, to address such concerns in the Open Internet Order, by arguing that broadband providers remained free to “make individualized decisions” with the only customers that matter: their subscribers. Today, the agency again insisted that restricting, however heavily, a broadband provider’s ability to negotiate with an edge provider (or the backbone providers in between) is irrelevant to the analysis of whether the FCC has illegally imposed common carriage. But if that argument worked, the D.C. Circuit would not have had to analyze whether the data roaming rule afforded sufficient flexibility to carriers in contracting with other carriers to provide data roaming services to their customers.

Similarly, the FCC failed today, and in its briefs, to effectively distinguish this case from Midwest Video II, which was critical to the Cellco decision. There, the Supreme Court struck down public-access rules imposed on cable companies as impermissible common carrier regulation because they “prohibited [cable operators] from determining or influencing the content of access programming,” and “delimit[ed] what [they could] charge for access and use of equipment.” In other words, the FCC’s rule left no flexibility for negotiations between companies — the same problem as in the Open Internet Order. The FCC attempted to distinguish the two cases by arguing that the FCC was restricting an existing wholesale market for channel carriage, while no such market exists today for prioritized Internet services. But this misses the key point made, emphatically, by Judge Silberman: it is the FCC’s relentless attempt to regulate Net Neutrality that has prevented the development of this market. Nothing better reveals the stasis mentality behind the FCC’s Order.

Perhaps the most damning moment of today’s arguments occurred when Verizon’s lawyer responded to questions about what room for negotiation was left under the unreasonable discrimination rule — by pointing to what the FCC itself said in Footnote 240 of the Order. There the FCC quotes, approvingly, comments filed by Sprint: “The unreasonable discrimination standard contained in Section 202(a) of the Act contains the very flexibility the Commission needs to distinguish desirable from improper discrimination.” In other words, the only room for “commercially reasonable negotiation” recognized by the FCC under the nondiscrimination rule is found in the limited discretion traditionally available to common carriers under Section 202(a). Oops. This #LawyerFail will doubtless feature prominently in the court’s discussion of this issue, as the FCC’s perhaps accidental concession that, whatever the agency claims, it’s really imposing common carrier status — analogous to Title II, no less!

Judges Tatel and Silberman seemed to disagree only as to whether the no-blocking rule would also fail under Cellco’s reasoning. Tatel suggested that if the non-discrimination rule didn’t exist, the blocking rule, standing alone, would “leave substantial room for individualized bargaining and discrimination in terms” just as the data roaming rule did. Tatel spent perhaps fifteen minutes trying to draw clear answers from all counsel on this point, but seemed convinced that, at most, the no-blocking rule simply imposed a duty on the broadband provider to allow an edge provider to reach its customers, while still allowing the broadband provider to negotiate for faster carriage on “commercially reasonable terms.” Silberman disagreed, insisting that the blocking rule still imposed a common carrier duty to carry traffic at a zero price.

Severability

Ultimately the distinction between these two rules under Cellco’s common carriage test may not matter. If the court decides that the order is not severable, striking down the nondiscrimination rule as common carriage would cause the entire Order to fall.

The judges got into an interesting, though relatively short, discussion of this point. Verizon’s counsel repeatedly noted that the FCC had never stated any intention that the order should be read as severable, either in the Order, in its briefs, or even at oral argument. Unlike in MD/DC/DE Broadcasters Assoc. v. FCC, the Commission did not state in the adopting regulation that it intended to treat the regulation as severable. And, as the D.C. Circuit has stated, “[s]everance and affirmance of a portion of an administrative regulation is improper if there is ‘substantial doubt’ that the agency would have adopted the severed portion on its own.”

The question, as the Supreme Court held in K Mart Corp. v. Cartier, Inc., is whether the remainder of the regulation could function sensibly without the stricken provision. This isn’t clear. While Judge Tatel seems to suggest that the rule against blocking could function without the nondiscrimination rule, Judge Silberman seems convinced that the two were intended as necessary complements by the FCC. The determination of the no-blocking rule’s severability may come down to Judge Rogers, who didn’t telegraph her view.

So what’s next?

The prediction made by Fred Campbell shortly after the Cellco decision seems like the most likely outcome: Tatel, joined by at least Silberman, could strike down the entire Order as imposing common carriage — while offering the FCC a roadmap to try its hand at Net Neutrality yet again by rewriting the discrimination rule to allow for prioritized or accelerated carriage on commercially reasonable terms.

Or, if the court decides the order is severable, it could strike down just the nondiscrimination rule — assuming the court could find either direct or ancillary jurisdiction for both the transparency rule and the no-blocking rule.

Either way, an FCC loss will mean that negotiated arrangements for priority carriage will be governed under something more like a rule of reason. The FCC could try to create its own rule.  Or the matter could simply be left to the antitrust and consumer protection laws enforced by the Department of Justice, the Federal Trade Commission, the states and private plaintiffs. We think the latter’s definitely the best approach. But whether it is or not, it will be the controlling legal authority on the ground the day the FCC loses — unless and until the FCC issues revised rules (or Congress passes a law) that can survive judicial review.

Ultimately, we suspect the FCC will have a hard time letting go. After 79 years, it’s clearly in denial about its growing obsolescence.

By Geoffrey Manne & Berin Szoka

As Democrats insist that income taxes on the 1% must go up in the name of fairness, one Democratic Senator wants to make sure that the 1% of heaviest Internet users pay the same price as the rest of us. It’s ironic how confused social justice gets when the Internet’s involved.

Senator Ron Wyden is beloved by defenders of Internet freedom, most notably for blocking the Protect IP bill—sister to the more infamous SOPA—in the Senate. He’s widely celebrated as one of the most tech-savvy members of Congress. But his latest bill, the “Data Cap Integrity Act,” is a bizarre, reverse-Robin Hood form of price control for broadband. It should offend those who defend Internet freedom just as much as SOPA did.

Wyden worries that “data caps” will discourage Internet use and allow “Internet providers to extract monopoly rents,” quoting a New York Times editorial from July that stirred up a tempest in a teapot. But his fears are straw men, based on four false premises.

First, US ISPs aren’t “capping” anyone’s broadband; they’re experimenting with usage-based pricing—service tiers. If you want more than the basic tier, your usage isn’t capped: you can always pay more for more bandwidth. But few users will actually exceed that basic tier. For example, Comcast’s basic tier, 300 GB/month, is so generous that 98.5% of users will not exceed it. That’s enough for 130 hours of HD video each month (two full-length movies a day) or between 300 and 1000 hours of standard (compressed) video streaming.
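As a rough sanity check on that arithmetic (the per-hour data figures below are assumptions — roughly 2.3 GB per hour of HD video and 0.3–1 GB per hour of compressed standard-definition streaming — not numbers taken from the original post):

```python
# Rough sanity check on the 300 GB monthly allowance.
# Per-hour data rates are assumptions for illustration only.
cap_gb = 300

hd_gb_per_hour = 2.3          # assumed HD video usage (~5 Mbps average bitrate)
sd_gb_per_hour = (0.3, 1.0)   # assumed range for compressed SD streaming

print(cap_gb / hd_gb_per_hour)                                 # ~130 hours of HD video
print(cap_gb / sd_gb_per_hour[1], cap_gb / sd_gb_per_hour[0])  # ~300 to ~1000 hours of SD
```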