Archives For network management

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to be permitted to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second, in the form of complaints about ISPs re-imposing UBB after the end of their voluntary, temporary suspension of the practice during the first months of the COVID-19 pandemic — a suspension that expanded upon the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed. Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and the extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that simply vary with customers’ usage of the service. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increased capacity to meet the growing demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate would rule out pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance improvements to its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users who make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
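The cost-recovery arithmetic can be made concrete with a toy calculation. The 95/5 user split and the 20-percent usage share come from figures cited in this post; the subscriber count and monthly network cost below are entirely made up for illustration:

```python
# Toy illustration of cost recovery under flat vs. usage-based pricing.
# The 95/5 user split and 20% usage share come from figures cited in the
# post; the subscriber count and network cost are hypothetical.

def flat_price(total_cost, n_users):
    """Under mandated flat pricing, every subscriber pays an equal share."""
    return total_cost / n_users

def ubb_prices(total_cost, n_light, n_heavy, heavy_usage_share):
    """Under usage-based pricing, costs are apportioned by usage share."""
    light_bill = total_cost * (1 - heavy_usage_share) / n_light
    heavy_bill = total_cost * heavy_usage_share / n_heavy
    return light_bill, heavy_bill

n_light, n_heavy = 950, 50   # 95% light users, 5% heavy users
total_cost = 50_000          # hypothetical monthly network cost ($)

flat = flat_price(total_cost, n_light + n_heavy)
light, heavy = ubb_prices(total_cost, n_light, n_heavy, 0.20)

print(f"flat rate:   ${flat:.2f} for every subscriber")
print(f"usage-based: ${light:.2f} for light users, ${heavy:.2f} for heavy users")
```

On these made-up numbers, flat pricing charges light users several dollars more per month than usage-based apportionment would, with the difference flowing to the heaviest users: exactly the cross-subsidy described above.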

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the baseline policy misapprehension that UBB is needed to manage the network, a fallacy dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing”: it is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
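The two-part tariff Coase described is simple to state: each customer pays a fixed access charge that recovers shared fixed costs, plus a per-unit rate that tracks the marginal cost of usage. A minimal sketch, with entirely hypothetical numbers:

```python
# Two-part tariff sketch: bill = fixed access charge + per-unit rate.
# The $30 base fee and $0.05/GB rate are hypothetical, chosen only to
# illustrate the structure Coase described.

def two_part_bill(usage_gb, base_fee=30.0, per_gb=0.05):
    """The fixed charge recovers shared fixed costs; the per-GB rate
    tracks the (roughly constant) marginal cost of usage."""
    return base_fee + per_gb * usage_gb

for gb in (100, 500, 2000):
    print(f"{gb:>5} GB -> ${two_part_bill(gb):.2f}")
```

A light user pays little more than the fixed charge, while a heavy user’s bill grows in proportion to use; each marginal gigabyte is priced at roughly its cost to serve.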

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they saddle low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place — with higher prices than they would face under capacity-based pricing, deterring them from subscribing at all. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (using 1 TB/month or more) and “extreme power users” (2 TB/month or more) jumped 138 percent and 215 percent, respectively. Power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather, it is in mandating a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also on load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better off the majority of US broadband customers will be.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking, so making more intensive use of our Internet connections during the day; many have children at home during the day who are using the Internet for both education and entertainment; and we are going out less in the evening so making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood and Ryssdal discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps — its approach follows that of the OIO, leaving ISPs free to adopt data-cap policies but requiring that they disclose them. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced small ISPs to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower-priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher-capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the image below, from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nationwide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their classes. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel).

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get significant overuse during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.

Data caps are one way to deal with this issue. Since the users making the most intensive use of the network are largely doing so at the same time (at peak hours), this incremental cost either discourages that use or provides the revenue necessary to expand capacity to accommodate it. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”)).
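The peak-load problem the last two paragraphs describe can be sketched in a few lines. The capacity and cost numbers below are entirely hypothetical; the point is only the shape of the cost curve:

```python
# Peak-load pricing sketch. Below capacity, one more unit of traffic is
# almost free to carry; at or above capacity, it requires expansion.
# All numbers are hypothetical.

CAPACITY = 100        # units of traffic the network is built to carry
OFF_PEAK_MC = 0.01    # near-zero marginal cost below capacity ($/unit)
PEAK_MC = 5.00        # marginal cost of expanding capacity ($/unit)

def marginal_cost(load):
    """Marginal cost of carrying one more unit at the given load."""
    return OFF_PEAK_MC if load < CAPACITY else PEAK_MC

# Pricing at the low off-peak cost invites overuse at the peak; pricing
# at the high peak cost deters nearly costless off-peak use.
print(marginal_cost(40))    # daytime load, well under capacity
print(marginal_cost(100))   # evening peak, at capacity
```

A data cap with overage charges approximates charging the high rate to the users most likely to be contributing to the peak, while time-of-day pricing would target the peak directly.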

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set are taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move along its docket items by the numbers in order to avoid the relentless criticism that it’s just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from regulating ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity only do so from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that directly affect New Jersey state agencies. But it’s probably impermissible to use those contractual requirements as a lever to force ISPs to treat third parties (i.e., New Jersey’s citizens) under net neutrality principles.

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more legal wiggle room because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to explicitly legislate, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Everyone thinks that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, it does not mean that the FCC is buying time for ISPs. In the end simple politics explains state actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC’s decisions.

Just in time for tomorrow’s FCC vote on repeal of its order classifying Internet Service Providers as common carriers, the St. Louis Post-Dispatch has published my op-ed entitled The FCC Should Abandon Title II and Return to Antitrust.

Here’s the full text:

The Federal Communications Commission (FCC) will soon vote on whether to repeal an Obama-era rule classifying Internet Service Providers (ISPs) as “common carriers.” That rule was put in place to achieve net neutrality, an attractive-sounding goal that many Americans—millennials especially—reflexively support.

In Missouri, voices as diverse as the St. Louis Post-Dispatch, the Joplin Globe, and the Archdiocese of St. Louis have opposed repeal of the Obama-era rule.

Unfortunately, few people who express support for net neutrality understand all it entails. Even fewer recognize the significant dangers of pursuing net neutrality using the means the Obama-era FCC selected. All many know is that they like neutrality generally and that smart-sounding celebrities like John Oliver support the Obama-era rule. They really need to know more.

First, it’s important to understand what a policy of net neutrality entails. In essence, it prevents ISPs from providing faster or better transmission of some Internet content, even where the favored content provider is willing to pay for prioritization.

That sounds benign—laudable, even—until one considers all that such a policy prevents. Under strict net neutrality, an ISP couldn’t prioritize content transmission in which congestion delays ruin the user experience (say, an Internet videoconference between a telemedicine system operated by the University of Missouri hospital and a rural resident of Dent County) over transmissions in which delays are less detrimental (say, downloads from a photo-sharing site).

Strict net neutrality would also preclude a mobile broadband provider from exempting popular content providers from data caps. Indeed, T-Mobile was hauled before the FCC to justify its popular “Binge On” service, which offered cost-conscious subscribers unlimited access to Netflix, ESPN, and HBO.

The fact is, ISPs have an incentive to manage their traffic in whatever way most pleases subscribers. The vast majority of Americans have a choice of ISPs, so managing content in any manner that adversely affects the consumer experience would hurt business. ISPs are also motivated to design subscription packages that consumers most desire. They shouldn’t have to seek government approval of innovative offerings.

For evidence that competition protects consumers from harmful instances of non-neutral network management, consider the record. The commercial Internet was born, thrived, and became the brightest spot in the American economy without formal net neutrality rules. History provides little reason to believe that the parade of horribles net neutrality advocates imagine will ever materialize.

Indeed, in seeking to justify its net neutrality policies, the Obama-era FCC could come up with only four instances of harmful non-neutral network management over the entire history of the commercial Internet. That should come as no surprise. Background antitrust rules, in place long before the Internet was born, forbid the speculative harms net neutrality advocates envision.

Even if net neutrality regulation were desirable as a policy matter, the means by which the FCC secured it was entirely inappropriate. Before it adopted the current approach, which reclassified ISPs as common carriers subject to Title II of the 1934 Communications Act, the FCC was crafting a narrower approach using authority granted by the 1996 Telecommunications Act.

It abruptly changed course after President Obama, reeling from a shellacking in the 2014 midterm elections, sought to shore up his base by posting a video calling for “the strongest possible rules” on net neutrality, including Title II reclassification. Prodded by the President, the supposedly independent commissioners abandoned their consensus that Title II was too extreme and voted along party lines to treat the Internet as a utility.

Title II reclassification has resulted in the sort of “Mother, may I?” regulatory approach that impedes innovation and investment. In the first half of 2015, as the Commission was formulating its new Title II approach, spending by ISPs on capital equipment fell by an average of 8%. That was only the third time in the history of the commercial Internet that infrastructure investment fell from the previous year. The other two times were in 2001, following the dot-com bust, and 2009, after the 2008 financial crash and ensuing recession. For those remote communities in Missouri still looking for broadband to reach their doorsteps, government policies need to incentivize more investment, not restrict it.

To enhance innovation and encourage broadband deployment, the FCC should reverse its damaging Title II order and leave concerns about non-neutral network management to antitrust law. It was doing just fine.

As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) commendably have announced a joint policy for cooperation on online consumer protection.  According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.

The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.

The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building – a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty.  Stay tuned for future developments.

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from HULU, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

The FCC’s proposed “Open Internet Order,” which would impose heavy-handed “common carrier” regulation of Internet service providers (the Order is being appealed in federal court and there are good arguments for striking it down) in order to promote “net neutrality,” is fundamentally misconceived.  If upheld, it will slow innovation, impose substantial costs, and harm consumers (see Heritage Foundation commentaries on FCC Internet regulation here, here, here, and here).  What’s more, it is not needed to protect consumers and competition from potential future abuse by Internet firms.  As I explain in a Heritage Foundation Legal Memorandum published yesterday, should the Open Internet Order be struck down, the U.S. Federal Trade Commission (FTC) has ample authority under Section 5 of the Federal Trade Commission Act (FTC Act) to challenge any harmful conduct by entities involved in Internet broadband services markets when such conduct undermines competition or harms consumers.

Section 5 of the FTC Act authorizes the FTC to prevent persons, partnerships, or corporations from engaging in “unfair methods of competition” or “unfair or deceptive acts or practices” in or affecting commerce.  This gives it ample authority to challenge Internet abuses raising antitrust (unfair methods) and consumer protection (unfair acts or practices) issues.

On the antitrust side, in evaluating individual business restraints under a “rule of reason,” the FTC relies on objective fact-specific analyses of the actual economic and consumer protection implications of a particular restraint.  Thus, FTC evaluations of broadband industry restrictions are likely to be more objective and predictable than highly subjective “public interest” assessments by the FCC, leading to reduced error and lower planning costs for purveyors of broadband and related services.  Appropriate antitrust evaluation should accord broad leeway to most broadband contracts.  As FTC Commissioner Josh Wright put it in testifying before Congress, “fundamental observation and market experience [demonstrate] that the business practices at the heart of the net neutrality debate are generally procompetitive.”  This suggests application of a rule of reason that will fully weigh efficiencies but not shy away from challenging broadband-related contractual arrangements that undermine the competitive process.

On the consumer protection side, the FTC can attack statements made by businesses that mislead and thereby impose harm on consumers (including business purchasers) who are acting reasonably.  It can also challenge practices that, though not literally false or deceptive, impose substantial harm on consumers (including business purchasers) that they cannot reasonably avoid, assuming the harm is greater than any countervailing benefits.  These are carefully designed and cabined sources of authority that require the FTC to determine the presence of actual consumer harm before acting.  Application of the FTC’s unfairness and deception powers therefore lacks the uncertainty associated with the FCC’s uncabined and vague “public interest” standard of evaluation.  As in the case of antitrust, the existence of greater clarity and a well-defined analytic methodology suggests that reliance on FTC rather than FCC enforcement in this area is preferable from a policy standpoint.

Finally, arguments for relying on FTC Internet policing are based on experience as well – the FTC is no Internet policy novice.  It closely monitors Internet activity and, over the years, it has developed substantial expertise in Internet topics through research, hearings, and enforcement actions.

Most recently, for example, the FTC sued AT&T in federal court for allegedly slowing wireless customers’ Internet speeds, although the customers had subscribed to “unlimited” data usage plans.  The FTC asserted that in offering renewals to unlimited-plan customers, AT&T did not adequately inform them of a new policy to “throttle” (drastically reduce the speed of) customer data service once a certain monthly data usage cap was met. The direct harm of throttling came in addition to the high fees dissatisfied customers would face for early termination of their service.  The FTC characterized this behavior as both “unfair” and “deceptive.”  Moreover, the commission claimed that throttling-related speed reductions and data restrictions were not determined by real-time network congestion and thus did not even qualify as reasonable network management activity.  This case illustrates that the FTC is perfectly capable of challenging potential “network neutrality” violations that harm consumer welfare (since “throttled” customers are provided service that is inferior to the service afforded customers on “tiered” service plans), and thus FCC involvement is unwarranted.

In sum, if a court strikes down the latest FCC effort to regulate the Internet, the FTC has ample authority to address competition and consumer protection problems in the area of broadband, including questions related to net neutrality.  The FTC’s highly structured, analytic, fact-based approach to these issues is superior to FCC net neutrality regulation based on vague and unfocused notions of the public interest.  If a court does not act, Congress might wish to consider legislation to prohibit FCC Internet regulation and leave oversight of potential competitive and consumer abuses to the FTC.

Last week a group of startup investors wrote a letter to protest what they assume FCC Chairman Tom Wheeler’s proposed, revised Open Internet NPRM will say.

Bear in mind that an NPRM is a proposal, not a final rule, and its issuance starts a public comment period. Bear in mind, as well, that the proposal isn’t public yet, presumably none of the signatories to this letter has seen it, and the devil is usually in the details. That said, the letter has been getting a lot of press.

I found the letter seriously wanting, and seriously disappointing. But it’s a perfect example of what’s so wrong with this interminable debate on net neutrality.

Below I reproduce the letter in full, in quotes, with my comments interspersed. The key take-away: Neutrality (or non-discrimination) isn’t what’s at stake here. What’s at stake is zero-cost access by content providers to broadband networks. One can surely understand why content providers and those who fund them want their costs of doing business to be lower. But the rhetoric of net neutrality is mismatched with this goal. It’s no wonder they don’t just come out and say it – it’s quite a remarkable claim.

Open Internet Investors Letter

The Honorable Tom Wheeler, Chairman
Federal Communications Commission
445 12th Street, SW
Washington D.C. 20554

May 8, 2014

Dear Chairman Wheeler:

We write to express our support for a free and open Internet.

We invest in entrepreneurs, investing our own funds and those of our investors (who are individuals, pension funds, endowments, and financial institutions).  We often invest at the earliest stages, when companies include just a handful of founders with largely unproven ideas. But, without lawyers, large teams or major revenues, these small startups have had the opportunity to experiment, adapt, and grow, thanks to equal access to the global market.

“Equal” access has nothing to do with it. No startup is inherently benefitted by being “equal” to others. Maybe this is just careless drafting. But frankly, as I’ll discuss, there are good reasons to think (contra the pro-net neutrality narrative) that startups will be helped by inequality (just like contra the (totally wrong) accepted narrative, payola helps new artists). It says more than they would like about what these investors really want that they advocate “equality” despite the harm it may impose on startups (more on this later).

Presumably what “equal” really means here is “zero cost”: “As long as my startup pays nothing for access to ISPs’ subscribers, it’s fine that we’re all on equal footing.” Wheeler has stated his intent that his proposal would require any prioritization to be available to any who want it, on equivalent, commercially reasonable terms. That’s “equal,” too, so what’s to complain about? But it isn’t really inequality that’s gotten everyone so upset.

Of course, access is never really “zero cost;” start-ups wouldn’t need investors if their costs were zero. In that sense, why is equality of ISP access any more important than other forms of potential equality? Why not mandate price controls on rent? Why not mandate equal rent? A cost is a cost. What’s really going on here is that, like Netflix, these investors want to lower their costs and raise their returns as much as possible, and they want the government to do it for them.

As a result, some of the startups we have invested in have managed to become among the most admired, successful, and influential companies in the world.

No startup became successful as a result of “equality” or even zero-cost access to broadband. No doubt some of their business models were predicated on that assumption. But it didn’t cause their success.

We have made our investment decisions based on the certainty of a level playing field and of assurances against discrimination and access fees from Internet access providers.

And they would make investment decisions based on the possibility of an un-level playing field if that were the status quo. More importantly, the businesses vying for investment dollars might be different ones if they built their business models in a different legal/economic environment. So what? This says nothing about the amount of investment, the types of businesses, the quality of businesses that would arise under a different set of rules. It says only that past specific investments might not have been made.

Unless the contention is that businesses would be systematically worse under a different rule, this is irrelevant. I have seen that claim made, and it’s implicit here, of course, but I’ve seen no evidence to actually support it. Businesses thrive in unequal, cost-laden environments all the time. It costs about $4 million/30 seconds to advertise during the Super Bowl. Budweiser and PepsiCo paid multiple millions this year to do so; many of their competitors didn’t. With inequality like that, it’s a wonder Sierra Nevada and Dr. Pepper haven’t gone bankrupt.

Indeed, our investment decisions in Internet companies are dependent upon the certainty of an equal-opportunity marketplace.

Again, no they’re not. Equal opportunity is a euphemism for zero cost, or else this is simply absurd on its face. Are these investors so lacking in creativity and ability that they can invest only when there is certainty of equal opportunity? Don’t investors thrive – aren’t they most needed – in environments where arbitrage is possible, where a creative entrepreneur can come up with a risky, novel way to take advantage of differential conditions better than his competitors? Moreover, the implicit equating of “equal-opportunity marketplace” with net neutrality rules is far-fetched. Is that really all that matters?

This is a good time to make a point that is so often missed: The loudest voices for net neutrality are the biggest companies – Google, Netflix, Amazon, etc. That fact should give these investors and everyone else serious pause. Their claim rests on the idea that “equality” is needed, so big companies can’t use an Internet “fast lane” to squash them. Google is decidedly a big company. So why do the big boys want this so much?

The battle is often pitched as one of ISPs vs. (small) content providers. But content providers have far less to worry about and face far less competition from broadband providers than from big, incumbent competitors. It is often claimed that “Netflix was able to pay Comcast’s toll, but a small startup won’t have that luxury.” But Comcast won’t even notice or care about a small startup; its traffic demands will be inconsequential. Netflix can afford to pay for Internet access for precisely the same reason it came to Comcast’s attention: It’s hugely successful, and thus creates a huge amount of traffic.

Based on news reports and your own statements, we are worried that your proposed rules will not provide the necessary certainty that we need to make investment decisions and that these rules will stifle innovation in the Internet sector.

Now, there’s little doubt that legal certainty aids investment decisions. But “certainty” is not in danger here. The rules have to change because the court said so – with pretty clear certainty. And a new rule is not inherently any more or less likely to offer certainty than the previous Open Internet Order, which itself was subject to intense litigation (obviously) and would have been subject to interpretation and inconsistent enforcement (and would have allowed all kinds of paid prioritization, too!). Certainty would be good, but Wheeler’s proposed rule won’t likely do anything about the amount of certainty one way or the other.

If established companies are able to pay for better access speeds or lower latency, the Internet will no longer be a level playing field. Start-ups with applications that are advantaged by speed (such as games, video, or payment systems) will be unlikely to overcome that deficit no matter how innovative their service.

Again, it’s notable that some of the strongest advocates for net neutrality are established companies. Another letter sent out last week included signatures from a bunch of startups, but also Google, Microsoft, Facebook and Yahoo!, among others.

In truth it’s hard to see why startup investors would think this helps them. Non-neutrality offers the prospect that a startup might be able to buy priority access to overcome the inherent disadvantage of newness, and to better compete with an established company. Neutrality means that that competitive advantage is impossible, and the baseline relative advantages and disadvantages remain – which helps incumbents, not startups. With a neutral Internet, the advantages of the incumbent competitor can’t be dissipated by a startup buying a favorable leg up in speed, and the Netflixes of the world will be more likely to continue to dominate.

Of course the claim is that incumbents will use their huge resources to gain even more advantage with prioritized access. Implicit in this must be the assumption that the advantage that could be gained by a startup buying priority offers less return for the startup than the cost imposed on it by the inherent disadvantages of reputation, brand awareness, customer base, etc. But that’s not plausible for all or even most startups. And investors exist precisely because they are able to provide funds for which there is a likelihood of a good return – so if paying for priority would help overcome inherent disadvantages, there would be money for it.

Also implicit is the claim that the benefits to incumbents (over and above their natural advantages) from paying for priority, in terms of hamstringing new entrants, will outweigh the cost. This, too, is generally unlikely to be true. They already have advantages. Sure, sometimes they might want to pay for more, but in precisely the cases where it would be worth it to do so, the new entrant would also be most benefited by doing so itself – ensuring, again, that investment funds will be available.

Of course if both incumbents and startups decide paying for priority is better, we’re back to a world of “equality,” so what’s to complain about, based on this letter? This puts into stark relief that what these investors really want is government-mandated, subsidized broadband access, not “equality.”

Now, it’s conceivable that that is the optimal state of affairs, but if it is, it isn’t for the reasons given here, nor has anyone actually demonstrated that it is the case.

Entrepreneurs will need to raise money to buy fast lane services before they have proven that consumers want their product. Investors will extract more equity from entrepreneurs to compensate for the risk.

Internet applications will not be able to afford to create a relationship with millions of consumers by making their service freely available and then build a business over time as they better understand the value consumers find in their service (which is what Facebook, Twitter, Tumblr, Pinterest, Reddit, Dropbox and virtually every other consumer Internet service did to achieve scale).

In other words: “Subsidize us. We’re worth it.” Maybe. But this is probably more revealing than intended. The Internet cost something to someone to build. (Actually, it cost broadband providers more than a trillion dollars). This just says “we shouldn’t have to pay them for it now.” Fine, but who, then, and how do you know that forcing someone else to subsidize these startup companies will actually lead to better results? Mightn’t we get less broadband investment such that there is little Internet available for these companies to take advantage of in the first place? If broadband consumers instead of content consumers foot the bill, is that clearly preferable, either from a social welfare perspective, or even the self interest of these investors who, after all, do ultimately rely on consumer spending to earn their return?

Moreover, why is this “build for free, then learn how to monetize over time” business model necessarily better than any other? These startup investors know better than anyone that enshrining existing business models just because they exist is the antithesis of innovation and progress. But that’s exactly what they’re saying – “the successful companies of the past did it this way, so we should get a government guarantee to preserve our ability to do it, too!”

This is the most depressing implication of this letter. These investors and others like them have been responsible for financing enormously valuable innovations. If even they can’t see the hypocrisy of these claims for net neutrality – and worse, choose to propagate it further – then we really have come to a sad place. When innovators argue passionately for stagnation, we’re in trouble.

Instead, creators will have to ask permission of an investor or corporate hierarchy before they can launch. Ideas will be vetted by committees and quirky passion projects will not get a chance. An individual in a dorm room or a design studio will not be able to experiment out loud on the Internet. The result will be greater conformity, fewer surprises, and less innovation.

This is just a little too much protest. Creators already have to ask “permission” – or are these investors just opening up their bank accounts to whoever wants their money? The ones that are able to do it on a shoestring, with money saved up from babysitting gigs, may find higher costs, and the need to do more babysitting. But again, there is nothing special about the Internet in this. Let’s mandate zero-cost office space and office supplies and developer services and design services and . . . etc. for all – then we’ll have way more “permission-less” startups. If it’s just a handout they want, they should say so, instead of pretending there is a moral or economic welfare basis for their claims.

Further, investors like us will be wary of investing in anything that access providers might consider part of their future product plans for fear they will use the same technical infrastructure to advantage their own services or use network management as an excuse to disadvantage competitive offerings.

This is crazy. For the same reasons I mentioned above, the big access provider (and big incumbent competitor, for that matter) already has huge advantages. If these investors aren’t already wary of investing in anything that Google or Comcast or Apple or… might plan to compete with, they must be terrible at their jobs.

What’s more, Wheeler’s much-reviled proposal (what we know about it, that is), to say nothing of antitrust law, clearly contemplates exactly this sort of foreclosure and addresses it. “Pure” net neutrality doesn’t add much, if anything, to the limits those laws already do or would provide.

Policing this will be almost impossible (even using a standard of “commercial reasonableness”) and access providers do not need to successfully disadvantage their competition; they just need to create a credible threat so that investors like us will be less inclined to back those companies.

You think policing the world of non-neutrality is hard – try policing neutrality. It’s not as easy as proponents make it out to be. It’s simply never been the case that all bits at all times have been treated “neutrally” on the Internet. Any version of an Open Internet Order (just like the last one, for example) will have to recognize this.

Larry Downes compiled a list of the exceptions included in the last Open Internet Order when he testified before the House Judiciary Committee on the rules in 2011. There are 16 categories of exemption, covering a wide range of fundamental components of broadband connectivity, from CDNs to free Wi-Fi at Starbucks. His testimony is a tour de force, and should be required reading for everyone involved in this debate.

But think about how the manifest advantages of these non-neutral aspects of broadband networks would be squared with “real” neutrality. On their face, if these investors are to be taken at their word, these arguments would preclude all of the Open Internet Order’s exemptions, too. And if any sort of inequality is going to be deemed ok, how accurately would regulators distinguish between “illegitimate” inequality and the acceptable kind that lets coffee shops subsidize broadband? How does the simplistic logic of net equality distinguish between, say, Netflix’s colocated servers and a startup like Uber being integrated into Google Maps? The simple answer is that it doesn’t, and the claims and arguments of this letter are woefully inadequate to the task.

We need simple, strong, enforceable rules against discrimination and access fees, not merely against blocking.

No, we don’t. Or, at least, no one has made that case. These investors want a handout; that is the only case this letter makes.

We encourage the Commission to consider all available jurisdictional tools at its disposal in ensuring a free and open Internet that rewards, not disadvantages, investment and entrepreneurship.

… But not investment in broadband, and not entrepreneurship that breaks with the business models of the past. In reality, this letter is simple rent-seeking: “We want to invest in what we know, in what’s been done before, and we don’t want you to do anything to make that any more costly for us. If that entails impairing broadband investment or imposing costs on others, so be it – we’ll still make our outsized returns, and they can write their own letter complaining about ‘inequality.’”

A final point I have to make. Although the investors don’t come right out and say it, many others have, and it’s implicit in the investors’ letter: “Content providers shouldn’t have to pay for broadband. Users already pay for the service, so making content providers pay would just let ISPs double dip.” The claim is deeply problematic.

For starters, it’s another form of the status quo mentality: “Users have always paid and content hasn’t, so we object to any deviation from that.” But it needn’t be that way. And of course models frequently coexist where different parties pay for the same or similar services. Some periodicals are paid for by readers and offer little or no advertising; others charge a subscription and offer paid ads; and still others are offered for free, funded entirely by ads. All of these models work. None is “better” than the others. There is no reason the same isn’t true for broadband and content.

Net neutrality claims that the only proper price to charge on the content side of the market is zero. (Congratulations: You’re in the same club as that cutting-edge, innovative technology, the check, which is cleared at par by government fiat. A subsidy that no doubt explains why checks have managed to last this long). As an economic matter, that’s possible; it could be that zero is the right price. But it most certainly needn’t be, and issues revolving around Netflix’s traffic and the ability of ISPs and Netflix to handle it cost-effectively are evidence that zero may well not be the right price.

The reality is that these sorts of claims are devoid of economic logic — which is presumably why they, like the whole net neutrality “movement” generally, appeal so gratuitously to emotion rather than reason. But it doesn’t seem unreasonable to hope for more from a bunch of savvy financiers.


Today the D.C. Circuit struck down most of the FCC’s 2010 Open Internet Order, rejecting rules that required broadband providers to carry all traffic for edge providers (“anti-blocking”) and prevented providers from negotiating deals for prioritized carriage. However, the appeals court did conclude that the FCC has statutory authority to issue “Net Neutrality” rules under Section 706(a) and let stand the FCC’s requirement that broadband providers clearly disclose their network management practices.

The following statement may be attributed to Geoffrey Manne and Berin Szoka:

The FCC may have lost today’s battle, but it just won the war over regulating the Internet. By recognizing Section 706 as an independent grant of statutory authority, the court has given the FCC near limitless power to regulate not just broadband, but the Internet itself, as Judge Silberman recognized in his dissent.

The court left the door open for the FCC to write new Net Neutrality rules, provided the Commission doesn’t treat broadband providers as common carriers. This means that, even without reclassifying broadband as a Title II service, the FCC could require that any deals between broadband and content providers be reasonable and non-discriminatory, just as it has required wireless carriers to provide data roaming services to their competitors’ customers on that basis. In principle, this might be a sound approach, if the rule resembles antitrust standards. But even that limitation could easily be evaded if the FCC regulates through case-by-case enforcement actions, as it tried to do before issuing the Open Internet Order. Either way, the FCC need only make a colorable argument under Section 706 that its actions are designed to “encourage the deployment… of advanced telecommunications services.” If the FCC’s tenuous “triple cushion shot” argument could satisfy that test, there is little limit to the deference the FCC will receive.

But that’s just for Net Neutrality. Section 706 covers “advanced telecommunications,” which seems to include any information service, from broadband to the interconnectivity of smart appliances like washing machines and home thermostats. If the court’s ruling on Section 706 is really as broad as it sounds, and as the dissent fears, the FCC just acquired wide authority over these, as well — in short, the entire Internet, including the “Internet of Things.” While the court’s “no common carrier rules” limitation is a real one, the FCC clearly just gained enormous power that it didn’t have before today’s ruling.

Today’s decision essentially rewrites the Communications Act in a way that will, ironically, do the opposite of what the FCC claims: hurt, not help, deployment of new Internet services. Whatever the FCC’s role ought to be, such decisions should be up to our elected representatives, not three unelected FCC Commissioners. So if there’s a silver lining in any of this, it may be that the true implications of today’s decision are so radical that Congress finally writes a new Communications Act — a long-overdue process Congressmen Fred Upton and Greg Walden have recently begun.

Szoka and Manne are available for comment at media@techfreedom.org.

At today’s Open Commission Meeting, the FCC is set to consider two apparently forthcoming Notices of Proposed Rulemaking that will shape the mobile broadband sector for years to come.  It’s not hyperbole to say that the FCC’s approach to the two issues at hand — the design of spectrum auctions and the definition of the FCC’s spectrum screen — can make or break wireless broadband in this country.  The FCC stands at a crossroads with respect to its role in this future, and it’s not clear that it will choose wisely.

Chairman Genachowski has recently jumped on the “psychology of abundance” bandwagon, suggesting that the firms that provide broadband service must (be forced by the FCC to) act as if spectrum and bandwidth were abundant (they aren’t), and not engage in activities that are sensible responses to broadband scarcity.  According to Genachowski, “Anything that depresses broadband usage is something that we need to be really concerned about. . . . We should all be concerned with anything that is incompatible with the psychology of abundance.”  This is the idea — popularized by non-economists and ideologues like Susan Crawford — that we should require networks to act as if we have “abundant” capacity, and enact regulations and restraints that prevent network operators from responding to actual scarcity with business structures, rational pricing or usage rules that could in any way deviate from this imaginary Nirvana.

This is rhetorical bunk.  The culprit here, if there is one, isn’t the firms that plow billions into expanding scarce capacity to meet abundant demand and struggle to manage their networks to maximize capacity within these constraints (dubbed “investment heroes” by the more reasonable lefties at the Progressive Policy Institute).  Firms act like there is scarcity because there is — and the FCC is largely to blame.  What we should be concerned about is not the psychology of abundance, but rather the sources of actual scarcity.

The FCC faces a stark choice—starting with today’s meeting.  The Commission can choose to continue to be the agency that micromanages scarcity as an activist intervenor in the market — screening out some market participants as “too big,” and scrutinizing every scarcity-induced merger, deal, spectrum transfer, usage cap, pricing decision and content restriction for how much it deviates from a fanciful ideal.  Or it can position itself as the creator of true abundance and simply open the spectrum spigot that it has negligently blocked for years, delivering more bandwidth into the hands of everyone who wants it.

If the FCC chooses the latter course — if it designs effective auctions that attract sellers, permitting participation by all willing buyers — everyone benefits.  Firms won’t act like there is scarcity if there is no scarcity.  Investment in networks and the technology that maximizes their capacity will continue as long as those investments are secure and firms are allowed to realize a return — not lambasted every time they try to do so.

If, instead, the Commission remains in thrall to self-proclaimed consumer advocates (in truth, regulatory activists) who believe against all evidence that they can and should design industry’s structure (“big is bad!”) and second-guess every business decision (“psychology of abundance!”), everyone loses (except the activists, I suppose).  Firms won’t stop acting like there’s scarcity until there is no scarcity.  And investment will take a backseat to unpopular network management decisions that represent the only sensible responses to uncertain, over-regulated market conditions.

Thomas Hazlett and I have posted The Law and Economics of Network Neutrality:

The Federal Communications Commission’s Network Neutrality Order regulates how broadband networks explain their services to customers, mandates that subscribers be permitted to deploy whatever computers, mobile devices, or applications they like for use with the network access service they purchase, and imposes a prohibition upon unreasonable discrimination in network management such that Internet Service Provider efforts to maintain service quality (e.g., mitigating congestion) or to price and package their services do not burden rival applications.

This paper offers a legal and economic critique of the new Network Neutrality policy and particularly the no blocking and no discrimination rules. While we argue the FCC’s rules are likely to be declared beyond the scope of the agency’s charter, we focus upon the economic impact of net neutrality regulations. It is beyond paradoxical that the FCC argues that it is imposing new regulations so as to preserve the Internet’s current economic structure; that structure has developed in an unregulated environment where firms are free to experiment with business models – and vertical integration – at will. We demonstrate that Network Neutrality goes far further than existing law, categorically prohibiting various forms of economic integration in a manner equivalent to antitrust’s per se rule, properly reserved for conduct that is so likely to cause competitive harm that the marginal benefit of a fact-intensive analysis cannot be justified. Economic analysis demonstrates that Network Neutrality cannot be justified upon consumer welfare grounds. Further, the Commission’s attempt to justify its new policy simply ignores compelling evidence that “open access” regulations have distorted broadband build-out in the United States, visibly reducing subscriber growth when imposed and visibly increasing subscriber growth when repealed. On the other hand, the FCC manages to cite just one study – not of the broadband market – to support its claims of widespread foreclosure threats. This empirical study, upon closer scrutiny than the Commission appears to have given it, actually shows no evidence of anti-competitive foreclosure. This fatal analytical flaw constitutes a smoking gun in the FCC’s economic analysis of net neutrality.

Read the whole thing.  Under review at a law review near you …

Commentators who see Trinko as an impediment to the claim that antitrust law can take care of harmful platform access problems (and thus that prospective rate regulation (i.e., net neutrality) is not necessary), commit an important error in making their claim–and it is a similar error committed by those who advocate for search neutrality regulation, as well.  In both cases, proponents are advocating for a particular remedy to an undemonstrated problem, rather than attempting to assess whether there is really a problem in the first place.  In the net neutrality context, it may be true that Trinko would prevent the application of antitrust laws to mandate neutral access as envisioned by Free Press, et al.  But that is not the same as saying Trinko precludes the application of antitrust laws.  In fact, there is nothing in Trinko that would prevent regulators and courts from assessing the anticompetitive consequences of particular network management decisions undertaken by a dominant network provider.  This is where the concerns do and should lie–not with an aesthetic preference for a particular form of regulation putatively justified as a response to this concern.  Indeed, “net neutrality” as an antitrust remedy, to the extent that it emanates from essential facilities arguments, is and should be precluded by Trinko.

But the Court seems to me to be pretty clear in Trinko that an antitrust case can be made, even against a firm regulated under the Telecommunications Act:

Section 601(b)(1) of the 1996 Act is an antitrust-specific saving clause providing that “nothing in this Act or the amendments made by this Act shall be construed to modify, impair, or supersede the applicability of any of the antitrust laws.”  This bars a finding of implied immunity. As the FCC has put the point, the saving clause preserves those “claims that satisfy established antitrust standards.”

But just as the 1996 Act preserves claims that satisfy existing antitrust standards, it does not create new claims that go beyond existing antitrust standards; that would be equally inconsistent with the saving clause’s mandate that nothing in the Act “modify, impair, or supersede the applicability” of the antitrust laws.

There is no problem assessing run of the mill anticompetitive conduct using “established antitrust standards.”  But that doesn’t mean that a net neutrality remedy can be constructed from such a case, nor does it mean that precisely the same issues that proponents of net neutrality seek to resolve with net neutrality are necessarily cognizable anticompetitive concerns.

For example, as Josh noted the other day, quoting Tom Hazlett, proponents of net neutrality seem to think that it should apply indiscriminately against even firms with no monopoly power (and thus no ability to inflict consumer harm in the traditional antitrust sense).  Trinko (along with a vast quantity of other antitrust precedent) would prevent the application of antitrust laws to reach this conduct–and thus, indeed, antitrust and net neutrality as imagined by its proponents are not coextensive.  I think this is very much to the good.  But, again, nothing in Trinko or elsewhere in the antitrust laws would prohibit an antitrust case against a dominant firm engaged in anticompetitive conduct just because it was also regulated by the FCC.

Critics point to language like this in Trinko to support their contrary claim:

One factor of particular importance is the existence of a regulatory structure designed to deter and remedy anticompetitive harm. Where such a structure exists, the additional benefit to competition provided by antitrust enforcement will tend to be small, and it will be less plausible that the antitrust laws contemplate such additional scrutiny.

But I don’t think that helps them at all.  What the Court is saying is not that one regulatory scheme precludes the other, but rather that if a regulatory scheme mandates conduct that makes the actuality of anticompetitive harm less likely, then the application of necessarily-imperfect antitrust law is likely to do more harm than good.  Thus the Court notes that

The regulatory framework that exists in this case demonstrates how, in certain circumstances, “regulation significantly diminishes the likelihood of major antitrust harm.”

But this does not say that regulation precludes the application of antitrust law.  Nor does it preclude the possibility that antitrust harm can still exist; nor does it suggest that any given regulatory regime reduces the likelihood of any given anticompetitive harm–and if net neutrality proponents could show that the regulatory regime did not in fact diminish the likelihood of antitrust harm, nothing in Trinko would suggest that antitrust should not apply.

So let’s get out there and repeal that FCC net neutrality order and let antitrust deal with any problems that might arise.