In the face of an unprecedented surge in demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made possible the enormous private investment required to build the network.

Yet, in spite of that success, calls to prevent ISPs from employing “usage-based billing” (UBB), thereby blinding pricing models to the bandwidth demands of users, have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second, in the form of complaints about ISPs re-imposing UBB after voluntarily suspending the practice during the first months of the COVID-19 pandemic — a suspension that expanded upon the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes, as (they assert) ISPs have long claimed. Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and the extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions under which prices differ according to customers’ usage of the service. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increased capacity to meet the growing demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
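
To make the cross-subsidy arithmetic concrete, here is a minimal sketch in Python. Every number in it (the usage distribution, the per-TB network cost, the tier prices) is an illustrative assumption of ours, not any ISP’s actual figure:

```python
# Illustrative comparison of flat-rate vs. usage-based billing (UBB).
# All numbers are hypothetical, chosen only to mirror the stylized
# facts above: most users are light, a few are very heavy.

usage_profile = [      # (monthly usage in TB, share of subscribers)
    (0.3, 0.80),       # light users: 80% of the base
    (1.0, 0.15),       # moderate users: 15%
    (3.0, 0.05),       # heavy users: 5%
]

COST_PER_TB = 20.0     # assumed network cost to recover, per TB carried

# Cost the ISP must recover, per 100 subscribers.
total_cost = sum(tb * share * 100 * COST_PER_TB for tb, share in usage_profile)

# Mandated flat rate: every subscriber pays the same price.
flat_price = total_cost / 100

# Two-part UBB tariff: a base price covers a 1.2 TB allotment;
# usage beyond the allotment pays a per-TB overage rate.
BASE_PRICE, ALLOTMENT, OVERAGE_RATE = 10.0, 1.2, 20.0

def ubb_bill(tb: float) -> float:
    """Monthly bill under the assumed two-part tariff."""
    return BASE_PRICE + max(0.0, tb - ALLOTMENT) * OVERAGE_RATE

for tb, _ in usage_profile:
    print(f"{tb:.1f} TB/mo: flat ${flat_price:.2f} vs. UBB ${ubb_bill(tb):.2f}")
```

Under these assumed numbers the flat mandate charges every subscriber $10.80, overcharging the 80 percent of light users (whose traffic costs roughly $6 a month to carry) in order to cover the heaviest 5 percent, while the two-part tariff keeps each bill close to the cost the user actually imposes.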

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the baseline misapprehension that UBB is needed to manage the network, a fallacy dispelled above. The other is born of simple unfamiliarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” It is how we buy apples at the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
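
In symbols (notation ours, not Coase’s), the two-part tariff is simply

$$T(q) = F + p\,q$$

where $F$ is the fixed base rate, $p$ is the per-unit usage charge, and $q$ is the customer’s usage. The typical ISP plan discussed below has the same structure with an included allotment $\bar{q}$, so the usage charge applies only to consumption beyond it: $T(q) = F + p \cdot \max(0,\, q - \bar{q})$.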

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well understood that such examples adopt effectively regressive pricing — charging everyone a price high enough to ensure a sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch.

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place — from subscribing by saddling them with higher prices than they would face under usage-based pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent effort to build out the network to serve high-usage customers, funded in part by UBB, redounds not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity, along with capacity-conserving efficiencies, has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (using 1 TB/month or more) jumped 138 percent, and the number of “extreme power users” (using 2 TB/month or more) jumped 215 percent. As a result, power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent (implying pre-pandemic shares of roughly 4 percent and 0.4 percent, respectively).

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather, it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also on load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better off the majority of US broadband customers will be.

Every five years, Congress must reauthorize the sunsetting provisions of the Satellite Television Extension and Localism Act (STELA), and the deadline for renewing the law (Dec. 31) is quickly approaching. While sunsetting is, in the abstract, a good way to ensure rules don’t become outdated, there is an interlocking set of interest groups who, generally speaking, support reauthorizing the law only because they are locked in a regulatory stalemate. STELA no longer represents an optimal outcome for many, if not most, of the affected parties. The time has come to finally allow STELA to sunset, and to use the occasion to reform the underlying regulatory morass on which it is built.

Much has changed in the marketplace since 1988, when STELA’s original version was enacted. At the time of the 1992 Cable Act (the first year for which data from the FCC’s Video Competition Reports are available), cable providers served 95% of multichannel video subscribers. Now, the power of cable has waned to the extent that two of the top four multichannel video programming distributors (MVPDs) are satellite providers, without even considering the explosion in competition from online video distributors like Netflix and Amazon Prime.

Given these developments, Congress should reconsider whether STELA is necessary at all, along with the whole complex regulatory structure undergirding it, and consider the relative simplicity with which copyright and antitrust law are capable of adequately facilitating the market for broadcast content negotiations. An approach building upon that contemplated in the bipartisan Modern Television Act of 2019 by Congressman Steve Scalise (R-LA) and Congresswoman Anna Eshoo (D-CA)—which would repeal the compulsory license/retransmission consent regime for both cable and satellite—would be a step in the right direction.

A brief history of STELA

STELA, which originated as the 1988 Satellite Home Viewer Act, was justified as necessary to promote satellite competition against incumbent cable networks and to give satellite companies stronger negotiating positions against network broadcasters. In particular, the goal was to give satellite providers the ability to transmit terrestrial network broadcasts to subscribers. To do this, the law modified both the Communications Act and the Copyright Act.

With the 1988 Satellite Home Viewer Act, Congress created a compulsory license for satellite retransmissions under Section 119 of the Copyright Act. This compulsory license provision mandated, just as the Cable Act did for cable providers, that satellite providers would have the right to certain network broadcast content in exchange for a government-set price (despite the fact that local network affiliates don’t necessarily own the copyrights themselves). The retransmission consent provision requires satellite providers (and cable providers under the Cable Act) to negotiate with network broadcasters over the fee to be paid for the right to network broadcast content.

Alternatively, broadcasters can opt to impose must-carry provisions on cable and satellite operators in lieu of retransmission consent negotiations. These provisions require satellite and cable operators to carry the channels of network broadcasters that elect mandatory carriage. As ICLE President Geoffrey Manne previously explained to Congress:

The must-carry rules require that, for cable providers offering 12 or more channels in their basic tier, at least one-third of these be local broadcast retransmissions. The forced carriage of additional, less-favored local channels results in a “tax on capacity,” and at the margins causes a reduction in quality… In the end, must-carry rules effectively transfer significant programming decisions from cable providers to broadcast stations, to the detriment of consumers… Although the ability of local broadcasters to opt in to retransmission consent in lieu of must-carry permits negotiation between local broadcasters and cable providers over the price of retransmission, must-carry sets a floor on this price, ensuring that payment never flows from broadcasters to cable providers for carriage, even though for some content this is surely the efficient transaction.

The essential question about the reauthorization of STELA concerns the following provisions:

  1. an exemption from retransmission consent requirements for satellite operators for the carriage of distant network signals to “unserved households” while maintaining the compulsory license right for those signals (modification of the compulsory license/retransmission consent regime);
  2. the prohibition on exclusive retransmission consent contracts between MVPDs and network broadcasters (per se ban on a business model); and
  3. the requirement that television broadcast stations and MVPDs negotiate in good faith (nebulous negotiating standard reviewed by FCC).

This regulatory scheme was supposed to sunset after 5 years. Instead of actually sunsetting, Congress has consistently reauthorized STELA (in 1994, 1999, 2004, 2010, and 2014).

Each time, satellite companies like DirecTV and Dish Network, as well as interest groups representing rural customers who depend heavily on satellite for television service, strongly supported renewal of the legislation. Over time, though, reauthorization has led to amendments supported by major players on each side of the negotiating table, and to broad support for what is widely considered “must-pass” legislation. In other words, every affected industry has found something to like in the compromise legislation.

As it stands, STELA’s sunset provision gives each side negotiating leverage during the next round of reauthorization talks, from which concessions are often drawn. But rather than simplifying this regulatory morass, STELA reauthorization simply extends rules that have outlived their purpose.

Current marketplace competition undermines the necessity of STELA reauthorization

The marketplace is very different in 2019 than it was when STELA’s predecessors were adopted and reauthorized. No longer is it the case that cable dominates and that satellite and other providers need a leg up just to compete. Moreover, there are now services that didn’t even exist when the STELA framework was first developed. Competition is thriving.

Wikipedia:

Rank  Service              Subscribers   Provider                      Type
1     Xfinity              21,986,000    Comcast                       Cable
2     DirecTV              19,222,000    AT&T                          Satellite
3     Spectrum             16,606,000    Charter                       Cable
4     Dish                  9,905,000    Dish Network                  Satellite
5     Verizon Fios TV       4,451,000    Verizon                       Fiber-Optic
6     Cox Cable TV          4,015,000    Cox Enterprises               Cable
7     U-Verse TV            3,704,000    AT&T                          Fiber-Optic
8     Optimum/Suddenlink    3,307,500    Altice USA                    Cable
9     Sling TV*             2,417,000    Dish Network                  Live Streaming
10    Hulu with Live TV     2,000,000    Hulu (Disney, Comcast, AT&T)  Live Streaming
11    DirecTV Now           1,591,000    AT&T                          Live Streaming
12    YouTube TV            1,000,000    Google (Alphabet)             Live Streaming
13    Frontier FiOS           838,000    Frontier                      Fiber-Optic
14    Mediacom                776,000    Mediacom                      Cable
15    PlayStation Vue         500,000    Sony                          Live Streaming
16    CableOne Cable TV       326,423    Cable One                     Cable
17    FuboTV                  250,000    FuboTV                        Live Streaming

A 2018 accounting of the largest MVPDs by subscribers shows that satellite providers hold 2 of the top 4 spots, and that over-the-top services like Sling TV, Hulu with Live TV, and YouTube TV are gaining significantly. And this does not even consider (non-live) streaming services such as Netflix (approximately 60 million US subscribers), Hulu (about 28 million US subscribers), and Amazon Prime Video (about 40 million US users). It is not clear from these numbers that satellite needs special rules in order to compete with cable, or that the complex regulatory regime underlying STELA is necessary anymore.

On the contrary, there seems to be ample reason to believe that content is king, and that the market for the distribution of that content is thriving. Competition among platforms is intense, not only among MVPDs like Comcast, DirecTV, Charter, and Dish Network, but also from streaming services like Netflix, Amazon Prime Video, Hulu, and HBO Now. Distribution networks invest heavily in exclusive content to attract consumers. There is no reason to think that we need selective forbearance from the byzantine regulations in this space in order to promote satellite adoption when satellite companies are just as good as any at contracting for high-demand content (for instance, DirecTV with NFL Sunday Ticket).

A better way forward: Streamlined regulation in the form of copyright and antitrust

As Geoffrey Manne said in his Congressional testimony on STELA reauthorization back in 2013: 

behind all these special outdated regulations are laws of general application that govern the rest of the economy: antitrust and copyright. These are better, more resilient rules. They are simple rules for a complex world. They will stand up far better as video technology evolves–and they don’t need to be sunsetted.

Copyright law establishes clearly defined rights, thereby permitting efficient bargaining between content owners and distributors. But under the compulsory license system, the copyright holders’ right to a performance license is fundamentally abridged. Retransmission consent normally requires that fees be paid for the content MVPDs carry. But STELA exempts certain network broadcasts (“distant signals” for “unserved households”) from retransmission consent requirements. This reduces incentives to develop content subject to STELA, which at the margin harms both content creators and viewers. It also gives satellite an unfair advantage vis-a-vis cable in those cases where it does not need to pay ever-rising retransmission consent fees. Ironically, it also reduces the incentive for satellite providers (DirecTV, at least) to work to provide local content to some rural consumers. Congress should reform the law to restore copyright holders’ full rights under the Copyright Act. Congress should also repeal the compulsory license and must-carry provisions that work at cross-purposes, and allow true marketplace negotiations.

The initial allocation of property rights guaranteed under copyright law would allow for MVPDs, including satellite providers, to negotiate with copyright holders for content, and thereby realize a more efficient set of content distribution outcomes than is otherwise possible. Under the compulsory license/retransmission consent regime underlying both STELA and the Cable Act, the outcomes at best approximate those that would occur through pure private ordering but in most cases lead to economically inefficient results because of the thumb on the scale in favor of the broadcasters. 

In a similar way, just as copyright law provides a superior set of bargaining conditions for content negotiation, antitrust law provides a superior mechanism for policing potentially problematic conduct between the firms involved. Under STELA, the FCC polices transactions with a “good faith” standard. In an important sense, this ambiguous regulatory discretion provides little information to prospective buyers and sellers of licenses as to what counts as “good faith” negotiations (aside from the specific practices listed).

By contrast, antitrust law, guided by the consumer welfare standard and decades of case law, is designed both to deter potential anticompetitive foreclosure and to provide a clear standard for firms engaged in the marketplace. The effect of relying on antitrust law to police competitive harms is — as the name of the standard suggests — a net increase in the welfare of consumers, the ultimate beneficiaries of a well-functioning market.

For instance, consider a hypothetical dispute between a network broadcaster and a satellite provider. Under the FCC’s “good faith” oversight, bargaining disputes, which are increasingly resulting in blackouts, are reviewed for certain negotiating practices deemed to be unfair, 47 CFR § 76.65(b)(1), and by a more general “totality of the circumstances” standard, 47 CFR § 76.65(b)(2). This is both over- and under-inclusive as the negotiating practices listed in (b)(1) may have procompetitive benefits in certain circumstances, and the (b)(2) totality of the circumstances standard is vague and ill-defined. By comparison, antitrust claims would be adjudicated through a foreseeable process with reference to a consumer welfare standard illuminated by economic evidence and case law.

If a satellite provider alleges anticompetitive foreclosure by a refusal to license, its claims would be subject to analysis under the Sherman Act. In order to prove its case, it would need to show that the network broadcaster has power in a properly defined market and is using that market power to foreclose competition by leveraging its ownership over network content to the detriment of consumer welfare. A court would then analyze whether the refusal to deal violates antitrust law under the Trinko and Aspen Skiing standards. Economic evidence would need to be introduced to support the allegation.

And, critically, in this process the defendants would be entitled to raise evidence in their case — both evidence suggesting that there was no foreclosure and evidence of procompetitive justifications for decisions that might otherwise be considered foreclosure. Ultimately, a court, bound by established, nondiscretionary standards, would weigh the evidence and make a determination. It is, of course, possible that a review for “good faith” conduct could reach the correct result, but there is simply no similarly rigorous process available to consistently push it in that direction.

The above-mentioned Modern Television Act of 2019 does represent a step in the right direction, as it would repeal the compulsory license/retransmission consent regime applied to both cable and satellite operators. It is imperfect, however, as it leaves must-carry requirements in place for local content and retains the “good faith” negotiating standard to be enforced by the FCC.

Expiration is better than the status quo even if fundamental reform is not possible

Some scholars who have written on this issue, and who very much agree that fundamental reform is needed, nonetheless argue that STELA should be renewed if more fundamental reforms like those described above can’t be achieved. For instance, George Ford recently wrote:

With limited days left in the legislative calendar before STELAR expires, there is insufficient time for a sensible solution to this complex issue. Senate Commerce Committee Chairman Roger Wicker (R-Miss.) has offered a “clean” STELAR reauthorization bill to maintain the status quo, which would provide Congress with some much-needed breathing room to begin tackling the gnarly issue of how broadcast signals can be both widely retransmitted and compensated. Congress and the Trump administration should welcome this opportunity.

However, even in a world without more fundamental reform, it is not clear that satellite needs distant signals in order to compete with cable. The number of “short markets”—i.e., those without access to all four local network broadcasts—implicated by the loss of distant signals is relatively small. However badly the overall regulatory scheme needs to be updated, it makes no sense to preserve STELA’s provisions benefiting satellite when they are no longer necessary on competition grounds.

Conclusion

Congress should not only let STELA sunset, but it should consider reforming the entire compulsory license/retransmission consent regime as the Modern Television Act of 2019 aims to do. In fact, reformers should look to go even further in repealing must-carry provisions and the good faith negotiating standard enforced by the FCC. Copyright and antitrust law are much better rules for this constantly evolving space than the current sector-specific rules. 

For previous work from ICLE on STELA see The Future of Video Marketplace Regulation (written testimony of ICLE President Geoffrey Manne from June 12, 2013) and Joint Comments of ICLE and TechFreedom, In the Matter of STELA Reauthorization and Video Programming Reform (March 19, 2014). 

On March 19-20, 2020, the University of Nebraska College of Law will be hosting its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide. 

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, using this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating such a regulatory environment can be just as difficult as managing the actual provision of service.

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
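
To illustrate the accounting point with entirely hypothetical numbers, the effective franchise burden is the cash fee plus the cost of in-kind obligations, measured against the statutory five percent cap:

```python
# Hypothetical illustration: the effective franchise burden counts
# in-kind obligations, not just the nominal cash fee.

gross_cable_revenue = 100_000_000  # assumed annual gross cable revenue ($)
cash_franchise_fee  =   5_000_000  # nominal fee at the statutory 5% cap
in_kind_marginal    =   1_200_000  # assumed cost of providing "free" services/facilities
in_kind_opportunity =     800_000  # assumed forgone revenue from capacity set aside

effective_burden = cash_franchise_fee + in_kind_marginal + in_kind_opportunity
effective_rate = effective_burden / gross_cable_revenue

print(f"Effective rate: {effective_rate:.1%} vs. statutory cap of 5.0%")
# -> Effective rate: 7.0% vs. statutory cap of 5.0%
```

On these assumptions, the in-kind exactions push the operator’s effective rate to 7 percent, two points past the cap that the statute’s broad definition of “franchise fee” was meant to enforce.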

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed-use networks (i.e., imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs and other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech in violation of the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually applied to First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny requires only that a regulation further an important or substantial governmental interest unrelated to the suppression of free expression, and that the incidental restriction on speech be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options for getting content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements–indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable and non-discriminatory”) commitments in the context of standards-setting bodies, and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.

At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.

The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018–about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:

  • It has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices paid by end consumers for cellphones and tablets.
  • Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
  • The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
  • Given that Qualcomm has made a commitment to standard-setting bodies to license these patents on FRAND terms, such behavior qualifies as a breach of FRAND.

The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:

[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.

Qualcomm filed a motion to dismiss on April 3, 2017. This was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates § 1 and § 2 of the Sherman Act and that Qualcomm had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTCA.

It is important to note that the core of the FTC’s argument regarding Qualcomm’s abuse of a dominant position rests on its adoption of the “no license, no chips” policy and the resulting breach of its FRAND obligations. However, the FTC falls short of showing how the royalties Qualcomm charges OEMs exceed FRAND rates so as actually to amount to a breach, or how they qualify as what the FTC defines as a “tax” under the price-squeeze theory it puts forth.

(The Court did not address whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though their scope remains quite amorphous.)

On August 30, the FTC filed a motion for partial summary judgment on claims concerning the applicability of California contract law. This would leave the antitrust issues to be decided in the subsequent hearing, which is set for January next year.

In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S.-based standards development organizations (SDOs):

  1. The Telecommunications Industry Association (TIA) and
  2. The Alliance for Telecommunications Industry Solutions (ATIS).

These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.

The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem chip manufacturers and sellers. It therefore requests that the Court grant summary judgment, since there are no disputed facts regarding this obligation. It submits that this would “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments on the requirement to license to competitors, to ETSI, a third SDO.”

A review of a heavily redacted filing by the FTC, and of a subsequent response by Qualcomm, indicates that questions of fact and law remain regarding Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.

Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.

The IP licensing policies of the two SDOs provide for licensing of relevant patents to all applicants who implement these standards on FRAND terms. The key issues, however, are whether components such as modem chips can be said to implement standards and whether component-level licensing falls within this ambit. The resolution of these issues remains unclear.

Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice cellular standards and because the standards do not define the operation of modem chips.

In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. However, it does not logically follow that each individual component of the device separately practices or implements that standard, even though it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.

These distinctions are significant for interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here, and Qualcomm’s submission draws attention to this.

An important factor in the determination of a “user” of a particular standard is the extent to which the standard is practiced or implemented therein. Some standards development organizations (SDOs) have addressed this in their policies by clarifying that FRAND obligations extend to those “wholly compliant” or “fully conforming” to the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder’s obligation to make licenses available is limited to “methods” and “equipments.” It defines an equipment as “a system or device fully conforming to a standard,” and methods as “any method or operation fully conforming to a standard.”

It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”

Device-level licensing is the prevailing industry-wide practice of companies like Ericsson, InterDigital, Nokia, and others. In November 2017, the European Commission issued guidelines on licensing of SEPs and took a balanced approach on this issue by not prescribing component-level licensing in its guidelines.

The former director general of ETSI, Karl Rosenbrock, takes a contrary view, explaining that ETSI’s policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”

Dr. Bertram Huber, a legal expert who personally participated in the drafting of ETSI’s IPR policy, wrote a response to Rosenbrock in which he explains that ETSI’s licensing obligations extend only to systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard essential patents practiced in those end-devices.

Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs that work on the development of cellular technologies, in a collaboration called the 3rd Generation Partnership Project. TIA and ATIS are both accredited by ANSI. These SDOs are therefore likely to influence one another through the policies each adopts. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.

The non-discrimination obligation, which, as per the FTC, requires Qualcomm to license its competitors who manufacture and sell chips, would be limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and depends upon the specific SDO’s policy. As discussed, the policies of ATIS and TIA are unclear on this.

In conclusion, the FTC’s filing does not obviate the need to hear extrinsic evidence on what Qualcomm’s commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA as to whether they include component-level licensing, and whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.

FCC Commissioner Rosenworcel penned an article this week on the doublespeak coming out of the current administration with respect to trade and telecom policy. On one hand, she argues, the administration has proclaimed 5G to be an essential part of our future commercial and defense interests. But, she tells us, the administration has, on the other hand, imposed tariffs on Chinese products that are important for the development of 5G infrastructure, thereby raising the costs of roll-out. This is a sound critique: regardless of where one stands on the reasonableness of tariffs, they unquestionably raise the prices of goods on which they are placed, and raising the price of inputs to the 5G ecosystem can only slow down the pace at which 5G technology is deployed.

Unfortunately, Commissioner Rosenworcel’s fervor for advocating the need to reduce the costs of 5G deployment seems animated by the courageous act of a Democratic commissioner decrying the policies of a Republican President and is limited to a context where her voice lacks any power to actually affect policy. Even as she decries trade barriers that would incrementally increase the costs of imported communications hardware, she staunchly opposes FCC proposals that would dramatically reduce the cost of deploying next generation networks.

Given the opportunity to reduce the costs of 5G deployment by a factor far more significant than that by which tariffs will increase them, her preferred role as Democratic commissioner is that of resistance fighter. She acknowledges that “we will need 800,000 of these small cells to stay competitive in 5G” — a number significantly above “the roughly 280,000 traditional cell towers needed to blanket the nation with 4G.” Yet, when she has had the opportunity to join the Commission in speeding deployment, she has instead dissented. Party over policy.

In this year’s “Historical Preservation” Order, for example, the Commission voted to expedite deployment on non-Tribal lands, and to exempt small cell deployments from certain onerous review processes under both the National Historic Preservation Act and the National Environmental Policy Act of 1969. Commissioner Rosenworcel dissented from the Order, claiming that the FCC has “long-standing duties to consult with Tribes before implementing any regulation or policy that will significantly or uniquely affect Tribal governments, their land, or their resources.” Never mind that the FCC engaged in extensive consultation with Tribal governments prior to enacting the Order.

Indeed, in adopting the Order, the Commission found that the Order did nothing to disturb deployment on Tribal lands at all, and affected only the ability of Tribal authorities to reach beyond their borders to require fees and lengthy reviews for small cells on lands in which Tribes could claim merely an “interest.”

According to the Order, the average number of Tribal authorities seeking to review wireless deployments in a given geographic area nearly doubled between 2008 and 2017. During the same period, commenters consistently noted that the fees charged by Tribal authorities for review of deployments increased dramatically.

One environmental consultant noted that fees for projects he was involved with increased from an average of $2,000.00 in 2011 to $11,450.00 in 2017. Verizon’s fees are $2,500.00 per small cell site just for Tribal review. Of the 8,100 requests that Verizon submitted for Tribal review between 2012 and 2015, just 29 (0.3%) resulted in a finding that there would be an adverse effect on Tribal historic properties. That means Verizon paid over $20 million to Tribal authorities over that period for historic reviews that resulted in statistically nil action. Along the same lines, Sprint’s fees are so high that it estimates “it could construct 13,408 new sites for what 10,000 sites currently cost.”

In other words, Tribal review practices — of deployments not on Tribal land — impose a substantial tariff upon 5G deployment, increasing its cost and slowing its pace.

There is a similar story in the Commission’s adoption of, and Commissioner Rosenworcel’s partial dissent from, the recent Wireless Infrastructure Order. Although Commissioner Rosenworcel offered many helpful suggestions (for instance, endorsing the OTARD proposal that Brent Skorup has championed) and nodded to the power of the market to solve many problems, she also dissented from central parts of the Order. Her dissent shows an unfortunate concern for provincial political interests, and places those interests above the Commission’s mission of ensuring timely deployment of advanced wireless communication capabilities to all Americans.

Commissioner Rosenworcel’s concern about the Wireless Infrastructure Order is that it would prevent state and local governments from imposing fees sufficient to recover costs incurred by the government to support wireless deployments by private enterprise, or from imposing aesthetic requirements on those deployments. Stated this way, her objections seem almost reasonable: surely local government should be able to recover the costs they incur in facilitating private enterprise; and surely local government has an interest in ensuring that private actors respect the aesthetic interests of the communities in which they build infrastructure.

The problem for Commissioner Rosenworcel is that the Order explicitly takes these concerns into account:

[W]e provide guidance on whether and in what circumstances aesthetic requirements violate the Act. This will help localities develop and implement lawful rules, enable providers to comply with these requirements, and facilitate the resolution of disputes. We conclude that aesthetics requirements are not preempted if they are (1) reasonable, (2) no more burdensome than those applied to other types of infrastructure deployments, and (3) objective and published in advance.

The Order neither prohibits localities from recovering costs nor prevents them from imposing aesthetic requirements. Rather, it requires merely that those costs and requirements be reasonable. The purpose of the Order isn’t to restrict localities from engaging in reasonable conduct; it is to prohibit them from engaging in unreasonable, costly conduct, while providing guidance as to what cost recovery and aesthetic considerations are reasonable (and therefore permissible).

The reality is that localities have a long history of using cost recovery — and especially “soft” or subjective requirements such as aesthetics — to extract significant rents from communications providers. In the 1980s this slowed the deployment and increased the costs of cable television. In the 2000s it slowed the deployment and increased the cost of fiber-based Internet service. Today it is slowing the deployment and increasing the costs of advanced wireless services. And like any tax — or tariff — the cost is ultimately borne by consumers.

Although we are broadly sympathetic to arguments about local control (and other 10th Amendment-related concerns), the FCC’s goal in the Wireless Infrastructure Order was not to trample upon the autonomy of small municipalities; it was to implement a reasonably predictable permitting process that would facilitate 5G deployment. Those affected would not be the small, local towns attempting to maintain a desirable aesthetic for their downtowns, but large and politically powerful cities like New York City, where the fees per small cell site can be more than $5,000.00 per installation. Such extortionate fees are effectively a tax on smartphone users and others who will use 5G for communications. The Order estimates that capping these fees would stimulate over $2.4 billion in additional infrastructure buildout, with widespread benefits to consumers and the economy.

Meanwhile, Commissioner Rosenworcel cries “overreach!” “I do not believe the law permits Washington to run roughshod over state and local authority like this,” she said. Her federalist bent is welcome — or it would be, if it weren’t in such stark contrast to her anti-federalist preference for preempting states from establishing rules governing their own internal political institutions when it suits her preferred political objective. We are referring, of course, to Rosenworcel’s support for the previous administration’s FCC’s decision to preempt state laws prohibiting the extension of municipal governments’ broadband systems. The order doing so was plainly illegal from the moment it was passed, as every court that has looked at it has held. That she was ok with. But imposing reasonable federal limits on states’ and localities’ ability to extract political rents by abusing their franchising process is apparently beyond the pale.

Commissioner Rosenworcel is right that the FCC should try to promote market solutions like Brent’s OTARD proposal. And she is also correct in opposing dangerous and destructive tariffs that will increase the cost of telecommunications equipment. Unfortunately, she gets it dead wrong when she supports a stifling regulatory status quo that will surely make it unduly difficult and expensive to deploy next generation networks — not least for those most in need of them. As Chairman Pai noted in his Statement on the Order: “When you raise the cost of deploying wireless infrastructure, it is those who live in areas where the investment case is the most marginal — rural areas or lower-income urban areas — who are most at risk of losing out.”

Reconciling those two positions entails nothing more than pointing to the time-honored Washington tradition of Politics Over Policy. The point is not (entirely) to call out Commissioner Rosenworcel; she’s far from the only person in Washington to make this kind of crass political calculation. In fact, she’s far from the only FCC Commissioner ever to have done so.

One need look no further than the previous FCC Chairman, Tom Wheeler, to see the hypocritical politics of telecommunications policy in action. (And one need look no further than Tom Hazlett’s masterful book, The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone to find a catalogue of its long, sordid history).

Indeed, Larry Downes has characterized Wheeler’s reign at the FCC (following a lengthy recounting of all its misadventures) as having left the agency “more partisan than ever”:

The lesson of the spectrum auctions—one right, one wrong, one hanging in the balance—is the lesson writ large for Tom Wheeler’s tenure at the helm of the FCC. While repeating, with decreasing credibility, that his lodestone as Chairman was simply to encourage “competition, competition, competition” and let market forces do the agency’s work for it, the reality, as these examples demonstrate, has been something quite different.

The Wheeler FCC has instead been driven by a dangerous combination of traditional rent-seeking behavior by favored industry clients, potent pressure from radical advocacy groups and their friends in the White House, and a sincere if misguided desire by Wheeler to father the next generation of network technologies, which quickly mutated from sound policy to empty populism even as technology continued on its own unpredictable path.

* * *

And the Chairman’s increasingly autocratic management style has left the agency more political and more partisan than ever, quick to abandon policies based on sound legal, economic and engineering principles in favor of bait-and-switch proceedings almost certain to do more harm than good, if only unintentionally.

The great irony is that, while Commissioner Rosenworcel’s complaints are backed by a legitimate concern that the Commission has waited far too long to take action on spectrum issues, the criticism should properly fall not upon the current Chair but upon — you guessed it — his predecessor, Chairman Wheeler (and his predecessor, Julius Genachowski). Of course, in true partisan fashion, Rosenworcel was fawning in her praise for her political ally’s spectrum agenda, lauding it on more than one occasion as going “to infinity and beyond!”

Meanwhile, Rosenworcel has taken virtually every opportunity to chide and castigate Chairman Pai’s efforts to get more spectrum into the marketplace, most often criticizing them as too little, too slow, and too late. Yet from any objective perspective, the current FCC has been addressing spectrum issues at a breakneck pace, as fast as, or faster than, any prior Commission. There is an upper limit to the speed at which a federal bureaucracy can work, and Chairman Pai has kept the Commission pushed right up against that limit.

It’s a shame Commissioner Rosenworcel prefers to blame Chairman Pai for the problems she had a hand in creating, and President Trump for problems she has no ability to correct. It’s even more of a shame that, having an opportunity to address the problems she so often decries — by working to get more spectrum deployed and put into service more quickly and at lower cost to industry and consumers alike — she prefers to dutifully wear the hat of resistance instead.

But that’s just politics, we suppose. And like any tariff, it makes us all poorer.

At this point, only the most masochistic and cynical among DC’s policy elite actually want the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promise only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it’s not completely clear that a CRA resolution aimed at a regulatory classification decision will work quite the way Congress intends, and it could simply trigger more litigation cycles, largely because it is unclear which parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom’s position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom’s central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) strip consumers of certain important protections — it would take away the FCC’s transparency requirements for ISPs and imperil privacy protections currently ensured by the FTC — and 2) prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back and forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Whatever that legislative solution ends up looking like, it would almost certainly be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom’s arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it), what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and it would disable the FCC from offering another transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has had, and continues to have, important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC’s purview, placing it back into the lap of the FCC — which is already prohibited from adopting privacy rules following last year’s successful CRA resolution undoing the Wheeler FCC’s broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether or not Congress is even accomplishing anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or it could be the case that even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched, viewpoints that have little relevance to the on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer’s telephone to the carrier’s switch. Once at the switch various other services could be offered — but providing them didn’t require more than a basic analog voice circuit to the customer’s home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high-cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today universal service, at least from the perspective of closing the digital divide, means something different, however. The technological needs of rural America are different than those of urban America; the technological needs of poor and lower-income America are different than those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country’s many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously both more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach how to close digital divides, need to be based in the first instance on the actual needs of the community that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more-typical (to the DC crowd), use cases.
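
To make that distinction concrete, the sketch below estimates the load from a hypothetical sensor deployment. Every number in it is an assumption chosen for illustration, not a field measurement:

```python
# Illustrative only: all figures below are assumed, not measured.
sensors_per_acre = 100    # assumed sensor density
acres = 1_000             # assumed farm size
reports_per_minute = 1    # assumed reporting cadence per sensor
bytes_per_packet = 100    # assumed small telemetry payload

packets_per_second = sensors_per_acre * acres * reports_per_minute / 60
bandwidth_mbps = packets_per_second * bytes_per_packet * 8 / 1e6

print(f"{packets_per_second:,.0f} packets per second")  # ~1,667
print(f"{bandwidth_mbps:.2f} Mbps of bandwidth")        # ~1.33 Mbps
```

Even under these modest assumptions, the network must sustain a high packet and connection rate while the total throughput remains trivial.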

On the other end of the agricultural spectrum, many farms don’t own their own combines. Combines cost upwards of a million dollars, and one modern combine is sufficient to tend several hundred acres in a given farming season. It is therefore common for farmers to hire someone who owns a combine to service their fields; during harvest season, one combine service may operate on a dozen farms. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.
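
A rough download-time calculation shows what is at stake; the dataset size below is purely an assumed figure for illustration, since the text does not specify one:

```python
# Illustrative only: the dataset size is assumed, not drawn from the text.
dataset_gb = 20  # assumed pre-season GIS, mapping, weather, and crop data

for label, mbps in [("1 Mbps connection", 1), ("25 Mbps connection", 25)]:
    hours = dataset_gb * 8_000 / mbps / 3_600  # GB -> megabits -> seconds -> hours
    print(f"{label}: {hours:.1f} hours to download")

# 1 Mbps connection:  44.4 hours -- days of idle time in a short harvest window
# 25 Mbps connection: 1.8 hours
```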

Going to the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren’t about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, literally a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.
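
As a sketch of that claim, the comparison below checks a modest link against assumed bitrates for basic telemedicine visits. The bitrate figures are my assumptions for illustration, not requirements drawn from the text:

```python
# Illustrative only: assumed bitrates for basic telemedicine visits.
requirements_mbps = {
    "audio-only visit": 0.1,   # assumed
    "SD video consult": 1.0,   # assumed
    "HD video consult": 3.0,   # assumed
}
link_mbps = 5.0  # e.g., a modest DSL or 4G connection

for visit, needed in requirements_mbps.items():
    status = "supported" if link_mbps >= needed else "not supported"
    print(f"{visit}: needs ~{needed} Mbps -> {status}")
```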

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides faced elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced price school lunches — a metric that corresponds with income — and rely on anchor institutions for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the DC policy firmament, are fundamentally different. These discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much at all to accomplish either of these.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence a decision whether to execute upon a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren’t going to be built unless their developers have access to the broadband connections needed to build and maintain them, and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services; it is not, as the Wheeler-era virtuous cycle would have it, the other way around.

The effect of Washington’s obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a strawman version of the rural digital divide, using it as a foil to protect and advocate for their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and direness of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287).

More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that among other things funded a $600 million pilot program to award grants and loans for rural broadband built out through the Department of Agriculture’s Rural Utilities Service. But both of these efforts rely too heavily on throwing money at the rural divide (speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it’s just another effort to give carriers cheap money, which doesn’t do much to help close the divide!). It is, nonetheless, good to see urgent calls for, and an interest in, experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set are taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move along its docket items by the numbers in order to avoid the relentless criticism that it’s just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from their activities to regulate ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government purchase broadband connectivity only from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that affect state agencies of New Jersey directly. But using those contractual requirements as a lever to force ISPs to treat third parties (i.e., New Jersey’s citizens) according to net neutrality principles is probably impermissible.

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more wiggle room in the legal interpretation because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to legislate explicitly, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Virtually everyone agrees that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, it does not mean that the FCC is buying time for ISPs. In the end, simple politics explains the states’ actions, and the normal (if often unsatisfying) back-and-forth of the administrative state explains the FCC’s decisions.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, the number was further reduced to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, including, e.g., Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulators altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which included disbanding its telecom regulator and passing regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
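
For readers unfamiliar with the term, WACC is the standard blended measure of a firm’s financing costs that regulators use when setting allowed rates of return. The sketch below is the textbook after-tax formula with arbitrary example inputs; it is not a description of the ACM’s actual methodology:

```python
def wacc(equity: float, debt: float, cost_of_equity: float,
         cost_of_debt: float, tax_rate: float) -> float:
    """Textbook after-tax weighted average cost of capital."""
    value = equity + debt
    return ((equity / value) * cost_of_equity
            + (debt / value) * cost_of_debt * (1 - tax_rate))

# Arbitrary example: 60/40 capital structure, 8% equity cost,
# 4% debt cost, 25% corporate tax rate.
print(f"{wacc(60, 40, 0.08, 0.04, 0.25):.2%}")  # 6.00%
```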

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator, New Zealand asserts that it can administer government operations more cost-effectively. Combining regulatory functions also created spillover benefits: competition analysis, for example, is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. Chorus, a wholesale ISP, states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

Over the weekend, Senator Al Franken and FCC Commissioner Mignon Clyburn issued an impassioned statement calling for the FCC to thwart the use of mandatory arbitration clauses in ISPs’ consumer service agreements — starting with a ban on mandatory arbitration of privacy claims in the Chairman’s proposed privacy rules. Unfortunately, their call to arms rests upon a number of inaccurate or weak claims. Before the Commissioners vote on the proposed privacy rules later this week, they should carefully consider whether consumers would actually be served by such a ban.

FCC regulations can’t override congressional policy favoring arbitration

To begin with, it is firmly cemented in Supreme Court precedent that the Federal Arbitration Act (FAA) “establishes ‘a liberal federal policy favoring arbitration agreements.’” As the Court recently held:

[The FAA] reflects the overarching principle that arbitration is a matter of contract…. [C]ourts must “rigorously enforce” arbitration agreements according to their terms…. That holds true for claims that allege a violation of a federal statute, unless the FAA’s mandate has been “overridden by a contrary congressional command.”

For better or for worse, that’s where the law stands, and it is the exclusive province of Congress — not the FCC — to change it. Yet nothing in the Communications Act (to say nothing of the privacy provisions in Section 222 of the Act) constitutes a “contrary congressional command.”

And perhaps that’s for good reason. In enacting the statute, Congress didn’t demonstrate the same pervasive hostility toward companies and their relationships with consumers that has characterized the way this FCC has chosen to enforce the Act. As Commissioner O’Rielly noted in dissenting from the privacy NPRM:

I was also alarmed to see the Commission acting on issues that should be completely outside the scope of this proceeding and its jurisdiction. For example, the Commission seeks comment on prohibiting carriers from including mandatory arbitration clauses in contracts with their customers. Here again, the Commission assumes that consumers don’t understand the choices they are making and is willing to impose needless costs on companies by mandating how they do business.

If the FCC were to adopt a provision prohibiting arbitration clauses in its privacy rules, it would conflict with the FAA — and the FAA would win. Along the way, however, it would create a thorny uncertainty for both companies and consumers seeking to enforce their contracts.  

The evidence suggests that arbitration is pro-consumer

But the lack of legal authority isn’t the only problem with the effort to shoehorn an anti-arbitration bias into the Commission’s privacy rules: It’s also bad policy.

In its initial broadband privacy NPRM, the Commission said this about mandatory arbitration:

In the 2015 Open Internet Order, we agreed with the observation that “mandatory arbitration, in particular, may more frequently benefit the party with more resources and more understanding of the dispute procedure, and therefore should not be adopted.” We further discussed how arbitration can create an asymmetrical relationship between large corporations that are repeat players in the arbitration system and individual customers who have fewer resources and less experience. Just as customers should not be forced to agree to binding arbitration and surrender their right to their day in court in order to obtain broadband Internet access service, they should not have to do so in order to protect their private information conveyed through that service.

The Commission may have “agreed” with the cited observations about arbitration, but that doesn’t make those views accurate. As one legal scholar has noted, summarizing the empirical data on the effects of arbitration:

[M]ost of the methodologically sound empirical research does not validate the criticisms of arbitration. To give just one example, [employment] arbitration generally produces higher win rates and higher awards for employees than litigation.

* * *

In sum, by most measures — raw win rates, comparative win rates, some comparative recoveries and some comparative recoveries relative to amounts claimed — arbitration generally produces better results for claimants [than does litigation].

A comprehensive, empirical study by Northwestern Law’s Searle Center on AAA (American Arbitration Association) cases found much the same thing, noting in particular that

  • Consumer claimants in arbitration incur average arbitration fees of only about $100 to arbitrate small (under $10,000) claims, and $200 for larger claims (up to $75,000).
  • Consumer claimants also win attorneys’ fees in over 60% of the cases in which they seek them.
  • On average, consumer arbitrations are resolved in under 7 months.
  • Consumers win some relief in more than 50% of cases they arbitrate…
  • And they do almost exactly as well in cases brought against “repeat-player” businesses.

In short, it’s extremely difficult to sustain arguments suggesting that arbitration is tilted against consumers relative to litigation.

(Upper) class actions: Benefitting attorneys — and very few others

But it isn’t just any litigation that Clyburn and Franken seek to preserve; rather, they are focused on class actions:

If you believe that you’ve been wronged, you could take your service provider to court. But you’d have to find a lawyer willing to take on a multi-national telecom provider over a few hundred bucks. And even if you won the case, you’d likely pay more in legal fees than you’d recover in the verdict.

The only feasible way for you as a customer to hold that corporation accountable would be to band together with other customers who had been similarly wronged, building a case substantial enough to be worth the cost—and to dissuade that big corporation from continuing to rip its customers off.

While litigation, of course, plays an important role in redressing consumer wrongs, class actions frequently don’t confer upon class members anything close to the imagined benefits that plaintiffs’ lawyers and their congressional enablers claim. According to a 2013 report on recent class actions by the law firm Mayer Brown LLP, for example:

  • “In [the] entire data set, not one of the class actions ended in a final judgment on the merits for the plaintiffs. And none of the class actions went to trial, either before a judge or a jury.” (Emphasis in original).
  • “The vast majority of cases produced no benefits to most members of the putative class.”
  • “For those cases that do settle, there is often little or no benefit for class members. What is more, few class members ever even see those paltry benefits — particularly in consumer class actions.”
  • “The bottom line: The hard evidence shows that class actions do not provide class members with anything close to the benefits claimed by their proponents, although they can (and do) enrich attorneys.”

Similarly, a CFPB study of consumer finance arbitration and litigation between 2008 and 2012 seems to indicate that the class action settlements and judgments it studied resulted in anemic relief to class members, at best. The CFPB tries to disguise the results with large, aggregated, and heavily caveated numbers that seem impressive, while never once actually indicating what the average payout per person was. But in the only hard numbers it provides (concerning four classes that ended up settling in 2013), promised relief amounted to under $23 each (comprising both cash and in-kind payment) if every class member claimed against the award. Back-of-the-envelope calculations based on the rest of the data in the report suggest that result was typical.

Furthermore, the average time to settlement of the cases the CFPB looked at was almost 2 years. And somewhere between 24% and 37% involved a non-class settlement — meaning class members received absolutely nothing at all because the named plaintiff personally took a settlement.

By contrast, according to the Searle Center study, the average award in the consumer-initiated arbitrations it studied (admittedly, involving cases with a broader range of claims) was almost $20,000, and the average time to resolution was less than 7 months.

To be sure, class action litigation has been an important part of our system of justice. But, as Arthur Miller — a legal pioneer who helped author the rules that make class actions viable — himself acknowledged, they are hardly a panacea:

I believe that in the 50 years we have had this rule, that there are certain class actions that never should have been brought, admitted; that we have burdened our judiciary, yes. But we’ve had a lot of good stuff done. We really have.

The good that has been done, according to Professor Miller, relates in large part to the civil rights violations of the 50’s and 60’s, which the class action rules were designed to mitigate:

Dozens and dozens and dozens of communities were desegregated because of the class action. You even see desegregation decisions in my old town of Boston where they desegregated the school system. That was because of a class action.

It’s hard to see how Franken and Clyburn’s concern for redress of “a mysterious 99-cent fee… appearing on your broadband bill” really comes anywhere close to the civil rights violations that spawned the class action rules. Particularly given the increasingly pervasive role of the FCC, FTC, and other consumer protection agencies in addressing and deterring consumer harms (to say nothing of arbitration itself), it is manifestly unclear why costly, protracted litigation that infrequently benefits anyone other than trial attorneys should be deemed so essential.

“Empowering the 21st century [trial attorney]”

Nevertheless, Commissioner Clyburn and Senator Franken echo the privacy NPRM’s faulty concerns about arbitration clauses that restrict consumers’ ability to litigate in court:

If you’re prohibited from using our legal system to get justice when you’re wronged, what’s to protect you from being wronged in the first place?

Well, what do they think the FCC is — chopped liver?

Hardly. In fact, it’s a little surprising to see Commissioner Clyburn (who sits on a Commission that proudly proclaims that “[p]rotecting consumers is part of [its] DNA”) and Senator Franken (among Congress’ most vocal proponents of the FCC’s claimed consumer protection mission) asserting that the only protection for consumers from ISPs’ supposed depredations is the cumbersome litigation process.

Of course, the FCC has claimed for itself the mantle of consumer protector, aimed at “Empowering the 21st Century Consumer.” But nowhere does the agency identify “promoting and preserving the rights of consumers to litigate” among its tools of consumer empowerment (nor should it). There is more than a bit of irony in a federal regulator (a commissioner of an agency charged with making sure, among other things, that corporations comply with the law) claiming that, without class actions, consumers are powerless in the face of bad corporate conduct.

Moreover, even if it were true (it’s not) that arbitration clauses tend to restrict redress of consumer complaints, effective consumer protection would still not necessarily be furthered by banning such clauses in the Commission’s new privacy rules.

The FCC’s contemplated privacy regulations are poised to introduce a wholly new and untested regulatory regime with (at best) uncertain consequences for consumers. Given the risk of consumer harm resulting from the imposition of this new regime, as well as the corollary risk of its excessive enforcement by complainants seeking to test or push the boundaries of new rules, an agency truly concerned with consumer protection would tread carefully. Perhaps, if the rules were enacted without an arbitration ban, it would turn out that companies would mandate arbitration (though this result is by no means certain, of course). And perhaps arbitration and agency enforcement alone would turn out to be insufficient to effectively enforce the rules. But given the very real costs to consumers of excessive, frivolous or potentially abusive litigation, cabining the litigation risk somewhat — even if at first it meant the regime were tilted slightly too much against enforcement — would be the sensible, cautious and pro-consumer place to start.

____

Whether rooted in a desire to “protect” consumers or not, the FCC’s adoption of a rule prohibiting mandatory arbitration clauses to address privacy complaints in ISP consumer service agreements would impermissibly contravene the FAA. As the Court has made clear, such a provision would “‘stand[] as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress’ embodied in the Federal Arbitration Act.” And not only would such a rule tend to clog the courts in contravention of the FAA’s objectives, it would do so without apparent benefit to consumers. Even if such a rule wouldn’t effectively be invalidated by the FAA, the Commission should firmly reject it anyway: A rule that operates primarily to enrich class action attorneys at the expense of their clients has no place in an agency charged with protecting the public interest.