
Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: from “direct consumer” government-run systems, to “open access,” where the government builds the back-end but leaves it to private firms to bring connections to consumers, to “middle mile,” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas where it operates, and largely fails cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move.

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices among different ways of producing goods for the next stage of production. Interest rates, the price of credit, help coordinate present consumption, investment, and saving. And the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs.

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers is dependent on the price. All other things being equal, consumers demand faster connections with lower latency. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the weight consumers place on upload versus download speeds may be highly asymmetrical if left to them to determine.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may sustain losses for a time, they must ultimately turn a profit or exit the marketplace. Profit and loss each serve important functions.

Sallet misses the point when he states the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and as a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes. 

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

As Thomas Sowell has noted many times, political debates often involve the use of words which, if taken literally, mean something very different from the connotations they convey. Examples abound in the debate about broadband buildout.

There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain. 

“Access” and “high-speed broadband”

For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” is much different from what the average American would expect.

A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice which presents itself.

But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem which calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure to rural areas.

“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.

However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The disconnect between this arbitrary number and consumer demand as measured in the marketplace, in light of real trade-offs between cost and performance, is not well explained in the study. The assumption is simply that faster is better, and that building faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.

But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps. 

“Competition” and “Overbuilding”

Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.

The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at the lowest cost. Specific markets are often subject to competition not only from the firms which exist within those markets, but also from potential competitors who may enter the market any time potential profits are high enough to justify the costs of entry. An important inference from this is that temporary monopolies, in the sense that one firm has a significant share of the market, are not in themselves illegal under antitrust law, even if the firm charges monopoly prices. Potential entry is as real in its effects as actual competitors in forcing incumbents to continue to innovate and provide value to consumers.

However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks. 

In fact, Sallet argues that fears of “overbuilding” are really just fears of competition by incumbent broadband ISPs:

Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more. 

Sallet makes two rhetorical moves here to make his argument. 

The first is to redefine “overbuilding” (which literally refers to building a new network on top of, that is “over,” previously built architecture) as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.

The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.  

But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, there are sometimes high fixed costs which limit the number of firms likely to exist in a competitive market. In some markets, known as natural monopolies, high infrastructure costs and other barriers to entry relative to the size of the market make it cheaper for a single firm to provide a good or service than for multiple firms to do so. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is significant risk in government subsidizing entry.

Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.” 

As noted by economist George Ford in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry which often causes incumbents to act competitively even in the absence of competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to that which could harm it. If the market only profitably sustains one or two competitors, adding another through municipal broadband or subsidizing a new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive because the subsidized entrant is shielded from the market test of profit-and-loss.

The “Donut Hole” Problem

The term “donut hole” is a final example of how words can be used to confuse rather than enlighten in this debate.

There is broad agreement that to generate the positive externalities from universal service, there needs to be subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches. 

For instance, some critics of the current subsidy approach have identified a phenomenon where the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but Internet service in between is relatively paltry due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while “underserved” outlying parts immediately surrounding town centers receive nothing under current policy.

Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served. 

Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole of funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally only provide municipal service — they only provide lower-cost service. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (because they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base. 

This version of the “donut hole,” in which municipal entry erodes the city-center revenues that private firms rely on to support the costs of providing service to outlying areas, has two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. Second, it increases a private firm’s average cost of providing service across its network (because the firm is no longer recovering as much of its costs from the lower-cost city core), which increases the prices that must be charged to rural users in order to justify offering service at all.
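
A toy example makes the arithmetic of this second effect concrete. This is only a sketch; all numbers (customer counts, per-customer costs, the 50 percent share taken by the municipal entrant) are invented to illustrate the mechanism, not drawn from any real market:

```python
# A toy illustration of the cross-subsidy "donut hole" described above.
# All numbers are assumed for illustration; they show the mechanism only.

CITY_CUSTOMERS, RURAL_CUSTOMERS = 900, 100
CITY_COST, RURAL_COST = 30.0, 90.0   # assumed monthly cost per customer

def break_even_price(city_share: float) -> float:
    """Uniform price needed to cover costs if the private ISP keeps
    `city_share` of its low-cost city-center customers."""
    city = CITY_CUSTOMERS * city_share
    total_cost = city * CITY_COST + RURAL_CUSTOMERS * RURAL_COST
    return total_cost / (city + RURAL_CUSTOMERS)

# Before municipal entry: low-cost city customers dilute the rural cost.
print(f"full city base: ${break_even_price(1.0):.2f}")   # -> $36.00
# After a municipal network takes half the city core: the same rural
# costs are spread over fewer low-cost customers, so prices must rise.
print(f"half city base: ${break_even_price(0.5):.2f}")   # -> $40.91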

Conclusion

Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.

Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second in the form of complaints about ISPs re-imposing UBB following an end to the voluntary temporary halting of the practice during the first months of the COVID-19 pandemic — a move that was an expansion by ISPs of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed.  Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that vary charges with customers’ usage of the services. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
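
A minimal sketch of the arithmetic may help. Everything here is hypothetical except the 95/5 split in the Comcast figure above; the usage levels and cost parameters are assumed purely to illustrate the mechanism:

```python
# A minimal sketch of flat-rate versus usage-based cost recovery.
# All numbers are hypothetical, chosen only to illustrate the mechanism.

LIGHT_USERS, HEAVY_USERS = 95, 5     # a 95/5 split, as in the Comcast figure
LIGHT_GB, HEAVY_GB = 300, 3_000      # assumed monthly data per user
COST_PER_GB = 0.01                   # assumed usage-driven network cost
FIXED_COST_PER_USER = 20.0           # assumed per-line fixed cost

total_cost = ((LIGHT_USERS + HEAVY_USERS) * FIXED_COST_PER_USER
              + (LIGHT_USERS * LIGHT_GB + HEAVY_USERS * HEAVY_GB) * COST_PER_GB)

# Flat-rate mandate: everyone pays an equal share of total cost.
flat_price = total_cost / (LIGHT_USERS + HEAVY_USERS)

# UBB: each user pays the fixed cost plus the cost of his or her own usage.
ubb_light = FIXED_COST_PER_USER + LIGHT_GB * COST_PER_GB
ubb_heavy = FIXED_COST_PER_USER + HEAVY_GB * COST_PER_GB

print(f"flat rate for everyone: ${flat_price:.2f}")   # -> $24.35
print(f"UBB light-user bill:    ${ubb_light:.2f}")    # -> $23.00
print(f"UBB heavy-user bill:    ${ubb_heavy:.2f}")    # -> $50.00
```

Under the flat rate in this sketch, each of the 95 light users pays about $1.35 above his own cost so that the five heavy users can each pay roughly $25 below theirs; UBB simply undoes that transfer.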

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the baseline policy misapprehension that UBB is needed to manage the network, a fallacy dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” It is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
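
For concreteness, the two-part tariff can be written out in simple notation (the notation is ours, not Coase’s):

```latex
% Two-part tariff: total bill T for a customer consuming quantity q
%   F : fixed access fee, recovering a share of common infrastructure costs
%   p : per-unit price, tracking the marginal cost of usage
T(q) = F + p \cdot q
```

The fixed fee keeps the shared platform funded across all subscribers, while the per-unit charge ensures each customer also bears the incremental cost of his or her own consumption.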

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place — from subscribing by saddling them with higher prices than they would face with capacity pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data use tracking service, the number of “power users” and “extreme power users” utilizing 1 TB/month or more and 2 TB/month or more jumped 138 percent and 215 percent, respectively. In total, power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather, it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better off the majority of US broadband customers will be.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Jane Bambauer (Professor of Law, University of Arizona James E. Rogers College of Law).]

The importance of testing and contact tracing to slow the spread of the novel coronavirus and resume normal life is now well established. The difference between the communities that do it and the ones that don’t is disturbingly grim (see, e.g., South Korea versus Italy). In a large population like the U.S., contact tracing and alerts will have to be done in an automated way with the help of mobile service providers’ geolocation data. The intensive use of data in South Korea has led many commenters to claim that the strategy that’s been so effective there cannot be replicated in western countries with strong privacy laws.

Descriptively, it’s probably true that privacy law and instincts in the US and EU will hinder virus surveillance.

The European Commission’s recent guidance on GDPR’s application to the COVID-19 crisis left a hurdle for member states. EU countries would have to introduce new legislation in order to use telecommunications data to do contact tracing, and the legislation would be reviewable by the European Court of Human Rights. No member states have done this, even though nearly all of them have instituted lock-down measures. 

Even Germany, which has announced the rollout of a cellphone tracking and alert app, has decided to make use of the app voluntary. This system will only be effective if enough people opt into it. (One study suggests the minimum participation rate would have to be “near universal,” so this does not bode well.)

And in the U.S., privacy advocacy groups like EPIC are already gearing up to challenge the collection of cellphone data by federal and state governments based on recent Fourth Amendment precedent finding that individuals have a reasonable expectation of privacy in cell phone location data.

And nearly every opinion piece I read from public health experts promoting contact tracing ends with some obligatory handwringing about the privacy and ethical implications. Research universities and units of government that are comfortable advocating for draconian measures of social distancing and isolation find it necessary to stall and consult their IRBs and privacy officers before pursuing options that involve data surveillance.

While ethicists and privacy scholars certainly have something to teach regulators during a pandemic, the coronavirus has something to teach us in return. It has thrown harsh light on the drawbacks and absurdities of rigid individual control over personal data.

Objections to surveillance lose their moral and logical bearings when the alternatives are out-of-control disease or mass lockdowns. Compared to those, mass surveillance is the most liberty-preserving option. Thus, instead of reflexively trotting out privacy and ethics arguments, we should take the opportunity to understand the order of operations—to know which rights and liberties are more vital than privacy so that we know when and why expectations in privacy need to bend. All but the most privacy-sensitive would count health and the liberty to leave one’s house among the most basic human interests, so the COVID-19 lockdowns are testing some of the practices and assumptions that are baked into our privacy laws.

At the highest level of abstraction, the pandemic should remind us that privacy is, ultimately, an instrumental right. It is meant to achieve certain social goals in fairness, safety, and autonomy. It is not an end in itself.  

When privacy is cloaked in the language of fundamental human rights, its instrumental function is obscured. Like other liberties in movement and commerce, conceiving of privacy as something that is under each individual’s control is a useful rule-of-thumb when it doesn’t conflict too much with other people’s interests. But the COVID-19 crisis shows that there are circumstances under which privacy as an individual right frustrates the very values in fairness, autonomy, and physical security that it is supposed to support. Privacy authorities and experts at every level need to be as clear and blunt as the experts supporting mass lockdowns: the government can do this, it will have to rely on industry, and we will work through the fallout and secondary problems when people stop dying.

At a minimum, epidemiologists and cellphone service providers should be able to rely on implied consent to data-sharing, just as the tort system allows doctors to presume consent for emergency surgery when a patient’s wishes cannot be observed in time. Geoffrey Manne suggested this in an earlier TOTM post about the allocation of information and medical resources:

But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Indeed, we should go further than this. There is a moral imperative to ignore even an express lack of consent when withholding important information would put others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their clients’ and wards’ privacy (e.g., New York’s requirement that doctors notify their patients’ partners about a positive HIV test if the patient fails to do so), this same logic applies at scale to the collection and analysis of data during a pandemic.

Another reason consent is inappropriate at this time is that it mars quantitative studies with selection bias. Medical reporting on the transmission and mortality of COVID-19 has had to rely much too heavily on data coming out of the Diamond Princess cruise ship because for a long time it was the only random sample—the only time that everybody was screened. 

The United States has done a particularly poor job tracking the spread of the virus because, faced with a shortage of tests, the CDC compounded our problems by denying those tests to anybody who didn’t meet specific criteria (a set of symptoms and either recent travel or known exposure to a confirmed case). These criteria all but guaranteed that our data would suggest coughs and fevers are necessary conditions for coronavirus, and they delayed our recognition of community spread. If we are able to do antibody testing in the near future to understand who has had the virus in the past, that data would be most useful over swaths of people who have not self-selected into a testing facility.

If consent is not an appropriate concept for privacy during a pandemic, might there be a defect in its theory even outside of crisis time? I have argued in the past that privacy should be understood as a collective interest in risk management, like negligence law, rather than a property-style right. The public health response to COVID-19 helps illustrate why this is so. The right to privacy is different from other liberties because it directly conflicts with another fundamental right: namely, the right to access information and knowledge. One person’s objection to contact tracing (or any other collection and distribution of data) necessarily conflicts with another’s interest in knowing who was in that person’s proximity during a critical period.

This puts privacy on very different footing from other rights, like the right to free movement. Generally, my right to travel in public space does not have to interfere with other people’s rights. It may interfere if, for example, I drive on the wrong side of the street, but the conflict is not inevitable. With a few restrictions and rules of coordination, there is ample opportunity for people to enjoy public spaces the way they want without forcing policymakers to decide between competing uses. Thus, when we suspend the right to free movement in unusual times like today, when one person’s movement in public space does cause significant detriment to others, we can have confidence that the liberty can be restored when the threat has subsided.

Privacy, by contrast, is inevitably at odds with a demonstrable desire by another person or firm to access information that they find valuable. Perhaps this is the reason that ethicists and regulators find it difficult to overcome privacy objections: when public health experts insist that privacy is conflicting with valuable information flows, a privacy advocate can say “yes, exactly.”

We can improve on the theoretical underpinnings of privacy law by embracing the fact that privacy is instrumental—a means (sometimes an effective one) to achieve other ends. If we are trying to achieve certain goals through its use—goals in equity, fairness, and autonomy—we should increase our effort to understand what types of uses of data implicate those outcomes. Fortunately, that work is already advancing at a fast clip in debates about socially responsible AI. The next step would be to assess whether individual control tends to support the good uses and reduce the bad uses. If our policies can ensure that machine learning applications are sufficiently “fair,” and if we can agree on what fairness entails, lawmakers can begin the fruitful and necessary work of shifting privacy law away from prohibitions on data collection and sharing and toward limits on its use in the areas where individual control is counter-productive.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking, so making more intensive use of our Internet connections during the day; many have children at home during the day who are using the Internet for both education and entertainment; and we are going out less in the evening so making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood & Ryssdal discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps—its approach follows that of the OIO: ISPs are free to adopt data caps but must disclose their policies. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced small ISPs to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage.  Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transfered, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nation-wide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the "peak" in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN'ing in to their work networks, or during which students are likely to be doing homework or conferencing into their classes. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel.)

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon the average usage, you are going to get significant over-use (congestion) during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.

Data caps are one way to deal with this issue. Since the users who make the most intensive use of the network are largely doing so at the same time (at peak hours), the incremental cost a cap imposes either discourages that use or provides the revenue necessary to expand capacity to accommodate it. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to "shave the peak" (a synonym, I kid you not, for "flatten the curve")).
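A minimal sketch of that peak-load logic (capacity and cost figures are hypothetical):

```python
# Capacity is sized to the peak, so only peak-hour usage drives cost
# (all parameters hypothetical).

CAPACITY_GBPS = 100.0
EXPANSION_COST_PER_GBPS = 50.0   # cost of adding a unit of peak capacity

def marginal_cost(load_gbps: float) -> float:
    """Marginal cost of one more unit of traffic at a given load."""
    if load_gbps < CAPACITY_GBPS:
        return 0.0                   # spare capacity: incremental use is ~free
    return EXPANSION_COST_PER_GBPS   # at the peak: more use means more capacity

hourly_load = {"3am": 20.0, "noon": 60.0, "9pm": 100.0}  # 9pm is the peak
for hour, load in hourly_load.items():
    print(f"{hour:>4}: load {load:5.1f} Gbps, marginal cost {marginal_cost(load):5.1f}")

# Pricing every hour at the average of these marginal costs overcharges
# the 3am user (deadweight loss) and undercharges the 9pm user (congestion).
```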

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

Every 5 years, Congress has to reauthorize the sunsetting provisions of the Satellite Television Extension and Localism Act (STELA). And the deadline for renewing the law is quickly approaching (Dec. 31). While sunsetting is, in the abstract, seemingly a good thing to ensure rules don't become outdated, there is an interlocking set of interest groups who, generally speaking, support reauthorizing the law only because they are locked in a regulatory stalemate. STELA no longer represents an optimal outcome for many if not most of the affected parties. The time is now for finally allowing STELA to sunset, and using this occasion to further reform the underlying regulatory morass it is built upon.

Since the creation of STELA in 1988, much has changed in the marketplace. At the time of the 1992 Cable Act (the first year data from the FCC’s Video Competition Reports is available), cable providers served 95% of multichannel video subscribers. Now, the power of cable has waned to the extent that 2 of the top 4 multichannel video programming distributors (MVPDs) are satellite providers, without even considering the explosion in competition from online video distributors like Netflix and Amazon Prime.

Given these developments, Congress should reconsider whether STELA is necessary at all, along with the whole complex regulatory structure undergirding it, and consider the relative simplicity with which copyright and antitrust law are capable of adequately facilitating the market for broadcast content negotiations. An approach building upon that contemplated in the bipartisan Modern Television Act of 2019 by Congressman Steve Scalise (R-LA) and Congresswoman Anna Eshoo (D-CA)—which would repeal the compulsory license/retransmission consent regime for both cable and satellite—would be a step in the right direction.

A brief history of STELA

STELA, originally known as the 1988 Satellite Home Viewer Act, was justified as necessary to promote satellite competition against incumbent cable networks and to give satellite companies stronger negotiating positions against network broadcasters. In particular, the goal was to give satellite providers the ability to transmit terrestrial network broadcasts to subscribers. To do this, the regulatory structure modified both the Communications Act and the Copyright Act.

With the 1988 Satellite Home Viewer Act, Congress created a compulsory license for satellite retransmissions under Section 119 of the Copyright Act. This compulsory license provision mandated, just as the Cable Act did for cable providers, that satellite would have the right to certain network broadcast content in exchange for a government-set price (despite the fact that local network affiliates don’t necessarily own the copyrights themselves). The retransmission consent provision requires satellite providers (and cable providers under the Cable Act) to negotiate with network broadcasters for the fee to be paid for the right to network broadcast content. 

Alternatively, broadcasters can opt to impose must-carry provisions on cable and satellite in lieu of retransmission consent negotiations. These provisions require satellite and cable operators to carry many channels from network broadcasters in order to have access to their content. As ICLE President Geoffrey Manne explained to Congress previously:

The must-carry rules require that, for cable providers offering 12 or more channels in their basic tier, at least one-third of these be local broadcast retransmissions. The forced carriage of additional, less-favored local channels results in a “tax on capacity,” and at the margins causes a reduction in quality… In the end, must-carry rules effectively transfer significant programming decisions from cable providers to broadcast stations, to the detriment of consumers… Although the ability of local broadcasters to opt in to retransmission consent in lieu of must-carry permits negotiation between local broadcasters and cable providers over the price of retransmission, must-carry sets a floor on this price, ensuring that payment never flows from broadcasters to cable providers for carriage, even though for some content this is surely the efficient transaction.

The essential question about the reauthorization of STELA regards the following provisions: 

  1. an exemption from retransmission consent requirements for satellite operators for the carriage of distant network signals to “unserved households” while maintaining the compulsory license right for those signals (modification of the compulsory license/retransmission consent regime);
  2. the prohibition on exclusive retransmission consent contracts between MVPDs and network broadcasters (per se ban on a business model); and
  3. the requirement that television broadcast stations and MVPDs negotiate in good faith (nebulous negotiating standard reviewed by FCC).

This regulatory scheme was supposed to sunset after 5 years. Instead of actually sunsetting, Congress has consistently reauthorized STELA (in 1994, 1999, 2004, 2010, and 2014).

Each time, satellite companies like DirecTV & Dish Network, as well as interest groups representing rural customers who depend heavily on satellite television, strongly supported the renewal of the legislation. Over time, though, the reauthorization has led to amendments supported by major players from each side of the negotiating table and broad support for what is widely considered "must-pass" legislation. In other words, every affected industry found something they liked about the compromise legislation.

As it stands, STELA's sunset provision gives each side negotiating leverage heading into the next round of reauthorization talks, from which concessions are often extracted. But rather than simplifying this regulatory morass, STELA reauthorization simply extends rules that have outlived their purpose.

Current marketplace competition undermines the necessity of STELA reauthorization

The marketplace is very different in 2019 than it was when STELA’s predecessors were adopted and reauthorized. No longer is it the case that cable dominates and that satellite and other providers need a leg up just to compete. Moreover, there are now services that didn’t even exist when the STELA framework was first developed. Competition is thriving.

Source: Wikipedia:

| Rank | Service | Subscribers | Provider | Type |
|------|---------|-------------|----------|------|
| 1 | Xfinity | 21,986,000 | Comcast | Cable |
| 2 | DirecTV | 19,222,000 | AT&T | Satellite |
| 3 | Spectrum | 16,606,000 | Charter | Cable |
| 4 | Dish | 9,905,000 | Dish Network | Satellite |
| 5 | Verizon Fios TV | 4,451,000 | Verizon | Fiber-Optic |
| 6 | Cox Cable TV | 4,015,000 | Cox Enterprises | Cable |
| 7 | U-Verse TV | 3,704,000 | AT&T | Fiber-Optic |
| 8 | Optimum/Suddenlink | 3,307,500 | Altice USA | Cable |
| 9 | Sling TV* | 2,417,000 | Dish Network | Live Streaming |
| 10 | Hulu with Live TV | 2,000,000 | Hulu (Disney, Comcast, AT&T) | Live Streaming |
| 11 | DirecTV Now | 1,591,000 | AT&T | Live Streaming |
| 12 | YouTube TV | 1,000,000 | Google (Alphabet) | Live Streaming |
| 13 | Frontier FiOS | 838,000 | Frontier | Fiber-Optic |
| 14 | Mediacom | 776,000 | Mediacom | Cable |
| 15 | PlayStation Vue | 500,000 | Sony | Live Streaming |
| 16 | CableOne Cable TV | 326,423 | Cable One | Cable |
| 17 | FuboTV | 250,000 | FuboTV | Live Streaming |

A 2018 accounting of the largest MVPDs by subscribers shows that satellite providers hold 2 of the top 4 spots, and that over-the-top services like Sling TV, Hulu with Live TV, and YouTube TV are gaining significantly. And this does not even consider (non-live) streaming services such as Netflix (approximately 60 million US subscribers), Hulu (about 28 million US subscribers) and Amazon Prime Video (which has about 40 million users in the US). It is not clear from these numbers that satellite needs special rules in order to compete with cable, or that the complex regulatory regime underlying STELA is necessary anymore.
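As a quick sanity check of those claims, here is a sketch that tallies the table above by distribution type (using the subscriber figures exactly as listed):

```python
# Tallying the 2018 subscriber table above by distribution type.

subscribers = {
    "Cable": [21_986_000, 16_606_000, 4_015_000, 3_307_500, 776_000, 326_423],
    "Satellite": [19_222_000, 9_905_000],
    "Fiber-Optic": [4_451_000, 3_704_000, 838_000],
    "Live Streaming": [2_417_000, 2_000_000, 1_591_000, 1_000_000, 500_000, 250_000],
}

for kind, subs in subscribers.items():
    print(f"{kind:>14}: {sum(subs):>12,}")

# Cable's ~47 million is barely half of the listed total; satellite alone
# accounts for ~29 million, before counting any of the streaming services.
```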

On the contrary, there seems to be a lot of reason to believe that content is king, and the market for the distribution of that content is thriving. Competition among platforms is intense, not only among MVPDs like Comcast, DirecTV, Charter, and Dish Network, but from streaming services like Netflix, Amazon Prime Video, Hulu, and HBO Now. Distribution networks heavily invest in exclusive content to attract consumers. There is no reason to think that we need selective forbearance from the byzantine regulations in this space in order to promote satellite adoption when satellite companies are just as good as any at contracting for high-demand content (for instance, DirecTV with NFL Sunday Ticket).

A better way forward: Streamlined regulation in the form of copyright and antitrust

As Geoffrey Manne said in his Congressional testimony on STELA reauthorization back in 2013: 

behind all these special outdated regulations are laws of general application that govern the rest of the economy: antitrust and copyright. These are better, more resilient rules. They are simple rules for a complex world. They will stand up far better as video technology evolves–and they don’t need to be sunsetted.

Copyright law establishes clearly defined rights, thereby permitting efficient bargaining between content owners and distributors. But under the compulsory license system, the copyright holders' right to a performance license is fundamentally abridged. Retransmission consent normally requires fees to be paid for the content that MVPDs have available to them. But STELA exempts certain network broadcasts ("distant signals" for "unserved households") from retransmission consent requirements. This reduces incentives to develop content subject to STELA, which at the margin harms both content creators and viewers. It also gives satellite an unfair advantage vis-a-vis cable in those cases where it does not need to pay ever-rising retransmission consent fees. Ironically, it also reduces the incentive for satellite providers (DirecTV, at least) to work to provide local content to some rural consumers. Congress should reform the law to allow copyright holders to have their full rights under the Copyright Act again. Congress should also repeal the compulsory license and must-carry provisions that work at cross-purposes and allow true marketplace negotiations.

The initial allocation of property rights guaranteed under copyright law would allow for MVPDs, including satellite providers, to negotiate with copyright holders for content, and thereby realize a more efficient set of content distribution outcomes than is otherwise possible. Under the compulsory license/retransmission consent regime underlying both STELA and the Cable Act, the outcomes at best approximate those that would occur through pure private ordering but in most cases lead to economically inefficient results because of the thumb on the scale in favor of the broadcasters. 

In a similar way, just as copyright law provides a superior set of bargaining conditions for content negotiation, antitrust law provides a superior mechanism for policing potentially problematic conduct between the firms involved. Under STELA, the FCC polices transactions with a “good faith” standard. In an important sense, this ambiguous regulatory discretion provides little information to prospective buyers and sellers of licenses as to what counts as “good faith” negotiations (aside from the specific practices listed).

By contrast, antitrust law, guided by the consumer welfare standard and decades of case law, is designed both to deter potential anticompetitive foreclosure and also to provide a clear standard for firms engaged in the marketplace. The effect of relying on antitrust law to police competitive harms is — as the name of the standard suggests — a net increase in the welfare of consumers, the ultimate beneficiaries of a well-functioning market.

For instance, consider a hypothetical dispute between a network broadcaster and a satellite provider. Under the FCC’s “good faith” oversight, bargaining disputes, which are increasingly resulting in blackouts, are reviewed for certain negotiating practices deemed to be unfair, 47 CFR § 76.65(b)(1), and by a more general “totality of the circumstances” standard, 47 CFR § 76.65(b)(2). This is both over- and under-inclusive as the negotiating practices listed in (b)(1) may have procompetitive benefits in certain circumstances, and the (b)(2) totality of the circumstances standard is vague and ill-defined. By comparison, antitrust claims would be adjudicated through a foreseeable process with reference to a consumer welfare standard illuminated by economic evidence and case law.

If a satellite provider alleges anticompetitive foreclosure by a refusal to license, its claims would be subject to analysis under the Sherman Act. In order to prove its case, it would need to show that the network broadcaster has power in a properly defined market and is using that market power to foreclose competition by leveraging its ownership over network content to the detriment of consumer welfare. A court would then analyze whether this refusal to deal violates antitrust law under the Trinko and Aspen Skiing standards. Economic evidence would need to be introduced that supports the allegation.

And, critically, in this process, the defendants would be entitled to raise evidence in their case — both evidence suggesting that there was no foreclosure, as well as evidence of procompetitive justifications for decisions that otherwise may be considered foreclosure. Ultimately, a court, bound by established, nondiscretionary standards would weigh the evidence and make a determination. It is, of course, possible, that a review for “good faith” conduct could reach the correct result, but there is simply not a similarly rigorous process available to consistently push it in that direction.

The above-mentioned Modern Television Act of 2019 does represent a step in the right direction, as it would repeal the compulsory license/retransmission consent regime applied to both cable and satellite operators. However, it is imperfect, as it leaves must-carry requirements in place for local content and retains the "good faith" negotiating standard to be enforced by the FCC.

Expiration is better than the status quo even if fundamental reform is not possible

Some scholars who have written on this issue, and who very much agree that fundamental reform is needed, nonetheless argue that STELA should be renewed if more fundamental reforms like those described above can't be achieved. For instance, George Ford recently wrote that

With limited days left in the legislative calendar before STELAR expires, there is insufficient time for a sensible solution to this complex issue. Senate Commerce Committee Chairman Roger Wicker (R-Miss.) has offered a “clean” STELAR reauthorization bill to maintain the status quo, which would provide Congress with some much-needed breathing room to begin tackling the gnarly issue of how broadcast signals can be both widely retransmitted and compensated. Congress and the Trump administration should welcome this opportunity.

However, even in a world without more fundamental reform, it is not clear that satellite needs distant signals in order to compete with cable. The number of "short markets"—i.e., those without access to all four local network broadcasts—implicated by the loss of distant signals is relatively small. However badly the overall regulatory scheme needs updating, it makes no sense to continue to preserve STELA's provisions benefiting satellite when they are no longer necessary on competition grounds.

Conclusion

Congress should not only let STELA sunset, but it should consider reforming the entire compulsory license/retransmission consent regime as the Modern Television Act of 2019 aims to do. In fact, reformers should look to go even further in repealing must-carry provisions and the good faith negotiating standard enforced by the FCC. Copyright and antitrust law are much better rules for this constantly evolving space than the current sector-specific rules. 

For previous work from ICLE on STELA see The Future of Video Marketplace Regulation (written testimony of ICLE President Geoffrey Manne from June 12, 2013) and Joint Comments of ICLE and TechFreedom, In the Matter of STELA Reauthorization and Video Programming Reform (March 19, 2014). 

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists' Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists' Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of "the lessons of history," but all he is left with is unsupported economic reasoning.

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the "outsized" influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists' hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists' notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan's law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan's article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out.
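A back-of-the-envelope way to see the problem with that theory is to ask what recoupment would require. The sketch below (the discount rate, horizon, and loss figure are all hypothetical) discounts decades of predation losses and solves for the perpetual monopoly profit needed just to break even:

```python
# Recoupment arithmetic for a decades-long predation story
# (all numbers hypothetical).

DISCOUNT_RATE = 0.08
ANNUAL_LOSS = 1.0          # normalized predation loss per year
PREDATION_YEARS = 20       # losing money "for decades"

# Present value of the losses incurred during the predation period.
pv_losses = sum(ANNUAL_LOSS / (1 + DISCOUNT_RATE) ** t
                for t in range(1, PREDATION_YEARS + 1))

# A perpetuity of monopoly profit starting in year 21, discounted to today;
# solve for the annual profit needed just to break even.
pv_per_unit_profit = (1 / DISCOUNT_RATE) / (1 + DISCOUNT_RATE) ** PREDATION_YEARS
required_profit = pv_losses / pv_per_unit_profit

print(f"PV of 20 years of losses: {pv_losses:.2f}")                   # ~9.82
print(f"break-even annual profit thereafter: {required_profit:.2f}")  # ~3.66
```

With these (hypothetical) numbers, the predator must earn several multiples of its annual predation loss, every year, in perpetuity, and face no entry in response to the higher prices: a demanding set of assumptions.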

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum's belief is that innovation is spurred when government forces dominant players "to make room" for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in "killer acquisitions" — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged "killer acquisitions",

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe compared to the U.S. at the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can be used to show superior antitrust frameworks in Europe versus the United States.

In the case of airline mergers, Appelbaum argues that the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement, and that prices stopped falling, leading to a situation where "[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States." This is hard to square with the data.

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers, takes it a step further. Data from legacy U.S. airline mergers appear to show they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum's proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While prices are lower on average in Europe for broadband, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo's 2014 study titled U.S. vs. European Broadband Deployment: What Do the Data Say? found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC's 2018 International Broadband Data Report shows the United States moving from 23rd to 14th out of 29 (mostly European) countries once fixed broadband prices are adjusted for population density and income (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries studied, once data usage is included (Model 3), and ranks 7th once content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

| Country | Model 1 Price (Rank) | Model 2 Price (Rank) | Model 3 Price (Rank) | Model 4 Price (Rank) |
|---------|----------------------|----------------------|----------------------|----------------------|
| Australia | $78.30 (28) | $82.81 (27) | $102.63 (26) | $84.45 (23) |
| Austria | $48.04 (17) | $60.59 (15) | $73.17 (11) | $74.02 (17) |
| Belgium | $46.82 (16) | $66.62 (21) | $75.29 (13) | $81.09 (22) |
| Canada | $69.66 (27) | $74.99 (25) | $92.73 (24) | $76.57 (19) |
| Chile | $33.42 (8) | $73.60 (23) | $83.81 (20) | $88.97 (25) |
| Czech Republic | $26.83 (3) | $49.18 (6) | $69.91 (9) | $60.49 (6) |
| Denmark | $43.46 (14) | $52.27 (8) | $69.37 (8) | $63.85 (8) |
| Estonia | $30.65 (6) | $56.91 (12) | $81.68 (19) | $69.06 (12) |
| Finland | $35.00 (9) | $37.95 (1) | $57.49 (2) | $51.61 (1) |
| France | $30.12 (5) | $44.04 (4) | $61.96 (4) | $54.25 (3) |
| Germany | $36.00 (12) | $53.62 (10) | $75.09 (12) | $66.06 (11) |
| Greece | $35.38 (10) | $64.51 (19) | $80.72 (17) | $78.66 (21) |
| Iceland | $65.78 (25) | $73.96 (24) | $94.85 (25) | $90.39 (26) |
| Ireland | $56.79 (22) | $62.37 (16) | $76.46 (14) | $64.83 (9) |
| Italy | $29.62 (4) | $48.00 (5) | $68.80 (7) | $59.00 (5) |
| Japan | $40.12 (13) | $53.58 (9) | $81.47 (18) | $72.12 (15) |
| Latvia | $20.29 (1) | $42.78 (3) | $63.05 (5) | $52.20 (2) |
| Luxembourg | $56.32 (21) | $54.32 (11) | $76.83 (15) | $72.51 (16) |
| Mexico | $35.58 (11) | $91.29 (29) | $120.40 (29) | $109.64 (29) |
| Netherlands | $44.39 (15) | $63.89 (18) | $89.51 (21) | $77.88 (20) |
| New Zealand | $59.51 (24) | $81.42 (26) | $90.55 (22) | $76.25 (18) |
| Norway | $88.41 (29) | $71.77 (22) | $103.98 (27) | $96.95 (27) |
| Portugal | $30.82 (7) | $58.27 (13) | $72.83 (10) | $71.15 (14) |
| South Korea | $25.45 (2) | $42.07 (2) | $52.01 (1) | $56.28 (4) |
| Spain | $54.95 (20) | $87.69 (28) | $115.51 (28) | $106.53 (28) |
| Sweden | $52.48 (19) | $52.16 (7) | $61.08 (3) | $70.41 (13) |
| Switzerland | $66.88 (26) | $65.01 (20) | $91.15 (23) | $84.46 (24) |
| United Kingdom | $50.77 (18) | $63.75 (17) | $79.88 (16) | $65.44 (10) |
| United States | $58.00 (23) | $59.84 (14) | $64.75 (6) | $62.94 (7) |
| Average | $46.55 | $61.70 | $80.24 | $73.73 |

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality
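Pulling the U.S. row out of the table makes the movement easy to see (figures as reported above):

```python
# The U.S. row from the FCC table: (price, rank) per model.

us_row = {
    "Model 1": (58.00, 23),
    "Model 2": (59.84, 14),
    "Model 3": (64.75, 6),
    "Model 4": (62.94, 7),
}

for model, (price, rank) in us_row.items():
    print(f"{model}: ${price:.2f}, rank {rank} of 29")

# Unadjusted, the U.S. looks expensive (23rd of 29); adjusted for
# demographics plus usage or quality, it ranks in the top quarter.
```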

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum's claims may itself be an illustration of caring too much about economic models instead of learning "the lessons of history." But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There's no escaping mental models to understand the world. It is just a question of whether we are willing to change our mind if a better way of understanding the world presents itself. As Keynes is purported to have said, "When the facts change, I change my mind. What do you do, sir?"

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtually received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division's unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar "big is bad" logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product for consumers.

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon plus Netflix bottleneck (with Hulu trailing behind), consumers now have, or in 2020 will have, a choice of at least four new streaming services with original content: Disney+, AT&T's "HBO Max," Apple's "Apple TV+," and Comcast's NBCUniversal "Peacock." Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that "big" is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television” in which content producers have enjoyed access to exceptional funding to support high-value productions.  It has been reported that Apple TV+’s new “Morning Show” series will cost $15 million per episode while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.”  Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or more precisely, the market as it existed about a decade ago. The theory that AT&T's acquisition of Time-Warner's creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, a "home" pay-TV market is fast becoming an anachronism and hence a home pay-TV "monopoly" largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power.

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives”– that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

On March 19-20, 2020, the University of Nebraska College of Law will be hosting its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide. 

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, via this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

Advanced broadband networks, including 5G, fiber, and high speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, the complexity of managing such a regulatory environment can be just as difficult as managing the actual provision of service. 

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
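A stylized example (all figures hypothetical) of how such in-kind "assessments" push the effective fee past the statutory cap:

```python
# Effective vs. nominal franchise fee when in-kind obligations are counted
# (all numbers hypothetical).

CABLE_REVENUE = 100.0            # gross cable revenue in the franchise area
CASH_FEE = 0.05 * CABLE_REVENUE  # nominal fee, at the statutory 5% cap

# Hypothetical in-kind obligations, valued at marginal plus opportunity cost.
in_kind_costs = {
    "free service to government facilities": 1.5,
    "channel capacity set-asides": 2.0,
    "other assessments": 0.5,
}

effective_fee = CASH_FEE + sum(in_kind_costs.values())
print(f"nominal fee:   {CASH_FEE / CABLE_REVENUE:.1%} of revenue")      # 5.0%
print(f"effective fee: {effective_fee / CABLE_REVENUE:.1%} of revenue")  # 9.0%

# The gap above the cap is a cost that is ultimately passed through to
# subscribers as higher prices, reduced quality, or both.
```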

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed use networks (i.e. imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs or other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).

While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it. In this blog post, we evaluate the state AGs' claims (and find them wanting).

Four firms good, three firms bad?

The state AG’s opposition to the T-Mobile/Sprint merger is based on a claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:

The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.

. . . 

Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.

But there are no economic grounds for the assertion that a four-firm industry is on a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.

A proper assessment of this transaction—as well as any other telecom merger—requires accounting for the specific characteristics of the markets affected by the merger. The accounting would include, most importantly, the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.
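To see why a simple firm count settles little, consider a minimal concentration sketch. The post-merger subscriber figures are those quoted above; the pre-merger T-Mobile/Sprint split is a hypothetical illustration, not reported data:

```python
# Herfindahl-Hirschman Index (HHI) before and after a 4-to-3 merger.

def hhi(subscribers):
    total = sum(subscribers)
    return sum((100 * s / total) ** 2 for s in subscribers)

# Millions of subscribers. The Verizon and AT&T figures are from the post
# above; the 80/56 T-Mobile/Sprint split is assumed for illustration.
pre  = [158, 156, 80, 56]
post = [158, 156, 136]

print(f"pre-merger HHI:  {hhi(pre):,.0f}")   # ~2,900
print(f"post-merger HHI: {hhi(post):,.0f}")  # ~3,350
```

An HHI screen flags the structural change, but it is silent on exactly the considerations that matter here: scale economies, fixed costs, and the investment incentives behind the 5G build-out.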

Opponents claim this merger takes us from four to three national carriers. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.

In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will decline following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improving service and–over time–lowering the cost of access.

Is prepaid a separate market?

The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitute a separate product market:

There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with a MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.

Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between the prepaid and postpaid markets, today the line between them is blurry, and may not mark a meaningful divide at all.

To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market. 

But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, some prepaid consumers lack the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past, and no one has demonstrated that such discrimination is possible or likely.

One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with the DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.

Will the merger harm low-income consumers?

While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention that divide received in submissions to the federal agencies. For example, the Communications Workers of America opined:

the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.

This is merely an assertion, and a misleading one at that. To the extent the “digital divide” would grow following the merger, it would be because urban access would improve more rapidly than rural access.

Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge. 

Quite the opposite: in the absence of the merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilizing that spectrum would require substantial investment in new infrastructure and in additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.

It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, exactly how much each group will benefit. But, prima facie, the prospect of improved rural access is a strong argument in favor of the merger from a public interest standpoint.

The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.

Concluding remarks

By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially for those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech in violation of the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of the leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually applied to First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny requires only that a regulation further an important or substantial governmental interest unrelated to the suppression of free expression, and that any incidental restriction on speech be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!), and cable no longer has anything like “bottleneck power.” Independent programmers now have many distribution options for getting content to consumers. Since the factual premise justifying intermediate scrutiny no longer accurately describes the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.