Archives For Broadband

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that, in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, which closed with the financial crisis of the late 2000s. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence that Amazon is actually engaging in predatory pricing. Khan’s article challenges the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday it can raise prices after running all retail competition out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe compared to the U.S. as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.

In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data. 

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, “Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers,” takes it a step further. Data from legacy U.S. airline mergers appear to show they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition — that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States — appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While prices are lower on average in Europe for broadband, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows a move in position from 23rd to 14th for the United States compared to 28 (mostly European) other countries once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further to 6th out of the 29 countries studied if data usage is included (Model 3) and 7th if quality (i.e., websites available in the local language) is taken into consideration (Model 4).

| Country | Model 1 Price (Rank) | Model 2 Price (Rank) | Model 3 Price (Rank) | Model 4 Price (Rank) |
|---|---|---|---|---|
| Australia | $78.30 (28) | $82.81 (27) | $102.63 (26) | $84.45 (23) |
| Austria | $48.04 (17) | $60.59 (15) | $73.17 (11) | $74.02 (17) |
| Belgium | $46.82 (16) | $66.62 (21) | $75.29 (13) | $81.09 (22) |
| Canada | $69.66 (27) | $74.99 (25) | $92.73 (24) | $76.57 (19) |
| Chile | $33.42 (8) | $73.60 (23) | $83.81 (20) | $88.97 (25) |
| Czech Republic | $26.83 (3) | $49.18 (6) | $69.91 (9) | $60.49 (6) |
| Denmark | $43.46 (14) | $52.27 (8) | $69.37 (8) | $63.85 (8) |
| Estonia | $30.65 (6) | $56.91 (12) | $81.68 (19) | $69.06 (12) |
| Finland | $35.00 (9) | $37.95 (1) | $57.49 (2) | $51.61 (1) |
| France | $30.12 (5) | $44.04 (4) | $61.96 (4) | $54.25 (3) |
| Germany | $36.00 (12) | $53.62 (10) | $75.09 (12) | $66.06 (11) |
| Greece | $35.38 (10) | $64.51 (19) | $80.72 (17) | $78.66 (21) |
| Iceland | $65.78 (25) | $73.96 (24) | $94.85 (25) | $90.39 (26) |
| Ireland | $56.79 (22) | $62.37 (16) | $76.46 (14) | $64.83 (9) |
| Italy | $29.62 (4) | $48.00 (5) | $68.80 (7) | $59.00 (5) |
| Japan | $40.12 (13) | $53.58 (9) | $81.47 (18) | $72.12 (15) |
| Latvia | $20.29 (1) | $42.78 (3) | $63.05 (5) | $52.20 (2) |
| Luxembourg | $56.32 (21) | $54.32 (11) | $76.83 (15) | $72.51 (16) |
| Mexico | $35.58 (11) | $91.29 (29) | $120.40 (29) | $109.64 (29) |
| Netherlands | $44.39 (15) | $63.89 (18) | $89.51 (21) | $77.88 (20) |
| New Zealand | $59.51 (24) | $81.42 (26) | $90.55 (22) | $76.25 (18) |
| Norway | $88.41 (29) | $71.77 (22) | $103.98 (27) | $96.95 (27) |
| Portugal | $30.82 (7) | $58.27 (13) | $72.83 (10) | $71.15 (14) |
| South Korea | $25.45 (2) | $42.07 (2) | $52.01 (1) | $56.28 (4) |
| Spain | $54.95 (20) | $87.69 (28) | $115.51 (28) | $106.53 (28) |
| Sweden | $52.48 (19) | $52.16 (7) | $61.08 (3) | $70.41 (13) |
| Switzerland | $66.88 (26) | $65.01 (20) | $91.15 (23) | $84.46 (24) |
| United Kingdom | $50.77 (18) | $63.75 (17) | $79.88 (16) | $65.44 (10) |
| United States | $58.00 (23) | $59.84 (14) | $64.75 (6) | $62.94 (7) |
| Average | $46.55 | $61.70 | $80.24 | $73.73 |

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating such a regulatory environment can be just as difficult as managing the actual provision of service.

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
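A stylized calculation makes the point. All figures below are invented for illustration (they are not drawn from any actual franchise agreement): once in-kind obligations are costed out, the effective franchise burden can exceed the nominal statutory cap.

```python
# Hypothetical illustration of how in-kind obligations can push the
# effective franchise burden above the Cable Act's 5% cap on gross
# cable revenues. All figures below are invented for the example.

def effective_fee_rate(gross_revenue, cash_fee, in_kind_costs):
    """Total franchise burden (cash fee plus the cost of in-kind
    obligations) as a share of gross cable revenue."""
    return (cash_fee + in_kind_costs) / gross_revenue

gross_revenue = 10_000_000          # annual gross cable revenue
cash_fee = 0.05 * gross_revenue     # cash fee at the statutory 5% cap
in_kind_costs = 300_000             # e.g., "free" service to municipal buildings

rate = effective_fee_rate(gross_revenue, cash_fee, in_kind_costs)
print(f"Effective rate: {rate:.1%}")   # Effective rate: 8.0%
```

On these made-up numbers, the operator's true burden is 8 percent of gross revenues, well above the 5 percent Congress permitted; that difference is the economic substance of the dispute.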

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed use networks (i.e. imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs or other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous mergers around the world can or have also been characterized as 4-to-3 mergers in the wireless telecommunications industry. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed merger of T-Mobile and Sprint to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom generally and of the T-Mobile/Sprint merger in particular. This is worrying because the study has several fundamental flaws.

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are members of the EU, the OECD, or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median amount of gigabytes available at each hypothetical price for all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.
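The two measures can be sketched as follows. The plan list is a toy example of my own, not Rewheel's data, so only the mechanics of the calculation carry over:

```python
# Sketch of Rewheel's two headline measures for one country's plans.
# The plan list below is invented for illustration, not Rewheel data.
from statistics import median

plans = [
    # (monthly price in EUR incl. VAT, gigabytes included)
    (10, 1), (20, 5), (30, 15), (40, 40), (60, 100),
]

def median_gb_at_price(plans, budget):
    """Measure 1: median gigabytes a hypothetical monthly budget buys.
    A plan contributes its full allowance if it fits the budget, else 0."""
    return median(gb if price <= budget else 0 for price, gb in plans)

def median_price_per_gb(plans):
    """Measure 2: median 'fully allocated' price per included gigabyte."""
    return median(price / gb for price, gb in plans)

print(median_gb_at_price(plans, 30))   # 1  (plans over budget count as 0)
print(median_price_per_gb(plans))      # 2.0
```

Note how sensitive both medians are to which plans happen to be in the list; that sensitivity is the root of the problems discussed below.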

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of consumers’ use of each plan. For example, a plan targeted at a consumer with a “high” level of usage is included alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans are included, thereby skewing the median estimates upward.
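A toy example (with invented per-gigabyte prices and subscriber shares) illustrates the skew: the unweighted median sits well above a subscriber-weighted measure of what consumers actually pay per gigabyte.

```python
# Toy demonstration (invented numbers) of the equal-weight median skew:
# pooling plans aimed at different users overstates what most people pay.
from statistics import median

# (price per GB in EUR, share of subscribers on the plan) — hypothetical
plans = [
    (10.0, 0.05),   # small starter plan, few subscribers
    (4.0, 0.15),
    (2.0, 0.50),    # the mainstream plan most consumers actually buy
    (1.0, 0.30),
]

unweighted_median = median(p for p, _ in plans)        # Rewheel's approach
weighted_mean = sum(p * share for p, share in plans)   # subscriber-weighted

print(unweighted_median, round(weighted_mean, 2))      # 3.0 2.4
```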

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to come to some strong conclusions, such as the declaration on the first page of its report that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing between three- and four-carrier countries. Thus, it is not obvious that three-carrier countries have significantly higher prices (as measured by Rewheel) than four-carrier countries.

[Figure: overlapping ranges of Rewheel’s price measures for three- and four-carrier countries]

A simple “eyeballing” of the data can lead to incorrect conclusions; statistical analysis can provide some more certainty (or, at least, some measure of uncertainty). Yet Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. However, the information on page 5 of the Rewheel report can be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 a month and €50 a month and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data are available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs separate analysis for EU28 countries, suggesting they think this is an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest higher GDP countries would be expected to have higher wireless prices.
  • Population density, measured by persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service which would, in turn, be reflected in higher prices.
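The regression described above can be sketched as follows. The data here are simulated under my own assumptions rather than taken from Rewheel's page-5 figures, so only the mechanics of the estimation, not the estimates themselves, carry over:

```python
# Sketch of the cross-country OLS described above, on simulated data.
# Dependent variable: price per gigabyte. Regressors: number of MNOs,
# EU28 dummy, GDP per capita (PPP), population density. Data are
# invented (seeded RNG), not Rewheel's actual figures.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                              # roughly the sample size

mnos = rng.integers(3, 6, size=n).astype(float)     # 3 to 5 carriers
eu28 = rng.integers(0, 2, size=n).astype(float)     # EU28 dummy
gdp = rng.normal(40_000, 10_000, size=n)            # GDP per capita (PPP)
density = rng.normal(150, 80, size=n)               # persons per sq. km

# Simulated price per GB with no true MNO effect built in
y = 5 + 0.0001 * gdp + rng.normal(0, 3, size=n)

X = np.column_stack([np.ones(n), mnos, eu28, gdp, density])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])           # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se
r_squared = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

print(f"MNO coefficient: {beta[1]:.3f} (t = {t_stats[1]:.2f}), R^2 = {r_squared:.2f}")
```

With a design like this, the question is simply whether the t-statistic on the MNO coefficient clears conventional significance thresholds; in Rewheel's data, as discussed below, it does not.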

The tables below confirm what an eyeballing of the figure suggests: Rewheel’s data show that the number of MNOs in a country has no statistically significant relationship with price per gigabyte, at either the €30-a-month level or the €50-a-month level.

[Table: Regression results for price per gigabyte at the €30/month and €50/month levels]

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low measures of goodness-of-fit. The independent variables in the regression explain approximately five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed, and its conclusions regarding three- vs. four-carrier countries are not only baseless but contradicted by closer inspection of the information presented in its own report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. Nor is this study unique; it should serve as a caution to be wary of studies that merely eyeball information.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman Maureen Ohlhausen noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist Luke Froeb puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones and licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

FCC Commissioner Rosenworcel penned an article this week on the doublespeak coming out of the current administration with respect to trade and telecom policy. On one hand, she argues, the administration has proclaimed 5G to be an essential part of our future commercial and defense interests. But, she tells us, the administration has, on the other hand, imposed tariffs on Chinese products that are important for the development of 5G infrastructure, thereby raising the costs of roll-out. This is a sound critique: regardless of where one stands on the reasonableness of tariffs, they unquestionably raise the prices of goods on which they are placed, and raising the price of inputs to the 5G ecosystem can only slow down the pace at which 5G technology is deployed.

Unfortunately, Commissioner Rosenworcel’s fervor for advocating the need to reduce the costs of 5G deployment seems animated by the courageous act of a Democratic commissioner decrying the policies of a Republican President and is limited to a context where her voice lacks any power to actually affect policy. Even as she decries trade barriers that would incrementally increase the costs of imported communications hardware, she staunchly opposes FCC proposals that would dramatically reduce the cost of deploying next generation networks.

Given the opportunity to reduce the costs of 5G deployment by a factor far more significant than that by which tariffs will increase them, her preferred role as Democratic commissioner is that of resistance fighter. She acknowledges that “we will need 800,000 of these small cells to stay competitive in 5G” — a number significantly above “the roughly 280,000 traditional cell towers needed to blanket the nation with 4G”. Yet, when she has had the opportunity to join the Commission on speeding deployment, she has instead dissented. Party over policy.

In this year’s “Historical Preservation” Order, for example, the Commission voted to expedite deployment on non-Tribal lands, and to exempt small cell deployments from certain onerous review processes under both the National Historic Preservation Act and the National Environmental Policy Act of 1969. Commissioner Rosenworcel dissented from the Order, claiming that the FCC has “long-standing duties to consult with Tribes before implementing any regulation or policy that will significantly or uniquely affect Tribal governments, their land, or their resources.” Never mind that the FCC engaged in extensive consultation with Tribal governments prior to enacting this Order.

Indeed, in adopting the Order, the Commission found that the Order did nothing to disturb deployment on Tribal lands at all, and affected only the ability of Tribal authorities to reach beyond their borders to require fees and lengthy reviews for small cells on lands in which Tribes could claim merely an “interest.”

According to the Order, the average number of Tribal authorities seeking to review wireless deployments in a given geographic area nearly doubled between 2008 and 2017. During the same period, commenters consistently noted that the fees charged by Tribal authorities for review of deployments increased dramatically.

One environmental consultant noted that fees for projects that he was involved with increased from an average of $2,000.00 in 2011 to $11,450.00 in 2017. Verizon’s fees are $2,500.00 per small cell site just for Tribal review. Of the 8,100 requests that Verizon submitted for tribal review between 2012 and 2015, just 29 (0.36%) resulted in a finding that there would be an adverse effect on tribal historic properties. That means that Verizon paid over $20 million to Tribal authorities over that period for historic reviews that resulted in statistically nil action. Along the same lines, Sprint’s fees are so high that it estimates that “it could construct 13,408 new sites for what 10,000 sites currently cost.”
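The arithmetic behind those figures is straightforward to check, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the Verizon tribal-review figures cited above.
requests = 8_100            # review requests submitted, 2012-2015
fee_per_site = 2_500        # USD per small cell site, just for Tribal review
adverse_findings = 29       # reviews finding an adverse effect

total_fees = requests * fee_per_site        # 20,250,000: "over $20 million"
adverse_rate = adverse_findings / requests  # roughly 0.0036, i.e. about 0.36%
```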

In other words, Tribal review practices — of deployments not on Tribal land — impose a substantial tariff upon 5G deployment, increasing its cost and slowing its pace.

There is a similar story in the Commission’s adoption of, and Commissioner Rosenworcel’s partial dissent from, the recent Wireless Infrastructure Order.  Although Commissioner Rosenworcel offered many helpful suggestions (for instance, endorsing the OTARD proposal that Brent Skorup has championed) and nodded to the power of the market to solve many problems, she also dissented on central parts of the Order. Her dissent shows an unfortunate concern for provincial, political interests and places those interests above the Commission’s mission of ensuring timely deployment of advanced wireless communication capabilities to all Americans.

Commissioner Rosenworcel’s concern about the Wireless Infrastructure Order is that it would prevent state and local governments from imposing fees sufficient to recover costs incurred by the government to support wireless deployments by private enterprise, or from imposing aesthetic requirements on those deployments. Stated this way, her objections seem almost reasonable: surely local government should be able to recover the costs they incur in facilitating private enterprise; and surely local government has an interest in ensuring that private actors respect the aesthetic interests of the communities in which they build infrastructure.

The problem for Commissioner Rosenworcel is that the Order explicitly takes these concerns into account:

[W]e provide guidance on whether and in what circumstances aesthetic requirements violate the Act. This will help localities develop and implement lawful rules, enable providers to comply with these requirements, and facilitate the resolution of disputes. We conclude that aesthetics requirements are not preempted if they are (1) reasonable, (2) no more burdensome than those applied to other types of infrastructure deployments, and (3) objective and published in advance

The Order neither prohibits localities from recovering costs nor prevents them from imposing aesthetic requirements. Rather, it requires merely that those costs and requirements be reasonable. The purpose of the Order isn’t to restrict localities from engaging in reasonable conduct; it is to prohibit them from engaging in unreasonable, costly conduct, while providing guidance as to what cost recovery and aesthetic considerations are reasonable (and therefore permissible).

The reality is that localities have a long history of using cost recovery — and especially “soft” or subjective requirements such as aesthetics — to extract significant rents from communications providers. In the 1980s this slowed the deployment and increased the costs of cable television. In the 2000s this slowed the deployment and increased the cost of fiber-based Internet service. Today this is slowing the deployment and increasing the costs of advanced wireless services. And like any tax — or tariff — the cost is ultimately borne by consumers.

Although we are broadly sympathetic to arguments about local control (and other 10th Amendment-related concerns), the FCC’s goal in the Wireless Infrastructure Order was not to trample upon the autonomy of small municipalities; it was to implement a reasonably predictable permitting process that would facilitate 5G deployment. Those affected would not be the small, local towns attempting to maintain a desirable aesthetic for their downtowns, but large and politically powerful cities like New York City, where the fees per small cell site can be more than $5,000.00 per installation. Such extortionate fees are effectively a tax on smartphone users and others who will utilize 5G for communications. According to the Order, it is estimated that capping these fees would stimulate over $2.4 billion in additional infrastructure buildout, with widespread benefits to consumers and the economy.

Meanwhile, Commissioner Rosenworcel cries “overreach!” “I do not believe the law permits Washington to run roughshod over state and local authority like this,” she said. Her federalist bent is welcome — or it would be, if it weren’t in such stark contrast to her anti-federalist preference for preempting states from establishing rules governing their own internal political institutions when it suits her preferred political objective. We are referring, of course, to Rosenworcel’s support for the previous administration’s FCC’s decision to preempt state laws prohibiting the extension of municipal governments’ broadband systems. The order doing so was plainly illegal from the moment it was passed, as every court that has looked at it has held. That she was ok with. But imposing reasonable federal limits on states’ and localities’ ability to extract political rents by abusing their franchising process is apparently beyond the pale.

Commissioner Rosenworcel is right that the FCC should try to promote market solutions like Brent’s OTARD proposal. And she is also correct in opposing dangerous and destructive tariffs that will increase the cost of telecommunications equipment. Unfortunately, she gets it dead wrong when she supports a stifling regulatory status quo that will surely make it unduly difficult and expensive to deploy next generation networks — not least for those most in need of them. As Chairman Pai noted in his Statement on the Order: “When you raise the cost of deploying wireless infrastructure, it is those who live in areas where the investment case is the most marginal — rural areas or lower-income urban areas — who are most at risk of losing out.”

Reconciling those two positions entails nothing more than pointing to the time-honored Washington tradition of Politics Over Policy. The point is not (entirely) to call out Commissioner Rosenworcel; she’s far from the only person in Washington to make this kind of crass political calculation. In fact, she’s far from the only FCC Commissioner ever to have done so.

One need look no further than the previous FCC Chairman, Tom Wheeler, to see the hypocritical politics of telecommunications policy in action. (And one need look no further than Tom Hazlett’s masterful book, The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone to find a catalogue of its long, sordid history).

Indeed, Larry Downes has characterized Wheeler’s reign at the FCC (following a lengthy recounting of all its misadventures) as having left the agency “more partisan than ever”:

The lesson of the spectrum auctions—one right, one wrong, one hanging in the balance—is the lesson writ large for Tom Wheeler’s tenure at the helm of the FCC. While repeating, with decreasing credibility, that his lodestone as Chairman was simply to encourage “competition, competition, competition” and let market forces do the agency’s work for it, the reality, as these examples demonstrate, has been something quite different.

The Wheeler FCC has instead been driven by a dangerous combination of traditional rent-seeking behavior by favored industry clients, potent pressure from radical advocacy groups and their friends in the White House, and a sincere if misguided desire by Wheeler to father the next generation of network technologies, which quickly mutated from sound policy to empty populism even as technology continued on its own unpredictable path.

* * *

And the Chairman’s increasingly autocratic management style has left the agency more political and more partisan than ever, quick to abandon policies based on sound legal, economic and engineering principles in favor of bait-and-switch proceedings almost certain to do more harm than good, if only unintentionally.

The great irony is that, while Commissioner Rosenworcel’s complaints are backed by a legitimate concern that the Commission has waited far too long to take action on spectrum issues, the criticism should properly fall not upon the current Chair, but — you guessed it — his predecessor, Chairman Wheeler (and his predecessor, Julius Genachowski). Of course, in true partisan fashion, Rosenworcel was fawning in her praise for her political ally’s spectrum agenda, lauding it on more than one occasion as going “to infinity and beyond!”

Meanwhile, Rosenworcel has taken virtually every opportunity to chide and castigate Chairman Pai’s efforts to get more spectrum into the marketplace, most often criticizing them as too little, too slow, and too late. Yet from any objective perspective, the current FCC has been addressing spectrum issues at a breakneck pace — as fast as, or faster than, any prior Commission. As with any regulatory agenda, there is an upper limit to the speed at which a federal bureaucracy can work, and Chairman Pai has kept the Commission pushed right up against that limit.

It’s a shame Commissioner Rosenworcel prefers to blame Chairman Pai for the problems she had a hand in creating, and President Trump for problems she has no ability to correct. It’s even more of a shame that, having an opportunity to address the problems she so often decries — by working to get more spectrum deployed and put into service more quickly and at lower cost to industry and consumers alike — she prefers to dutifully wear the hat of resistance, instead.

But that’s just politics, we suppose. And like any tariff, it makes us all poorer.

As has been rumored in the press for a few weeks, today Comcast announced it is considering making a renewed bid for a large chunk of Twenty-First Century Fox’s (Fox) assets. Fox is in the process of a significant reorganization, entailing primarily the sale of its international and non-television assets. Fox itself will continue, but with a focus on its US television business.

In December of last year, Fox agreed to sell these assets to Disney, in the process rejecting a bid from Comcast. Comcast’s initial bid was some 16% higher than Disney’s, although there were other differences in the proposed deals, as well.

In April of this year, Disney and Fox filed a proxy statement with the SEC explaining the basis for the board’s decision, including predominantly the assertion that the Comcast bid (NB: Comcast is identified as “Party B” in that document) presented greater regulatory (antitrust) risk.

As noted, today Comcast announced it is in “advanced stages” of preparing another unsolicited bid. This time,

Any offer for Fox would be all-cash and at a premium to the value of the current all-share offer from Disney. The structure and terms of any offer by Comcast, including with respect to both the spin-off of “New Fox” and the regulatory risk provisions and the related termination fee, would be at least as favorable to Fox shareholders as the Disney offer.

Because, as we now know (since the April proxy filing), Fox’s board rejected Comcast’s earlier offer largely on the basis of the board’s assessment of the antitrust risk it presented, and because that risk assessment (and the difference between an all-cash and all-share offer) would now be the primary distinguishing feature between Comcast’s and Disney’s bids, it is worth evaluating that conclusion as Fox and its shareholders consider Comcast’s new bid.

In short: There is no basis for ascribing a greater antitrust risk to Comcast’s purchase of Fox’s assets than to Disney’s.

Summary of the Proposed Deal

Post-merger, Fox will continue to own Fox News Channel, Fox Business Network, Fox Broadcasting Company, Fox Sports, Fox Television Stations Group, and sports cable networks FS1, FS2, Fox Deportes, and Big Ten Network.

The deal would transfer to Comcast (or Disney) the following:

  • Primarily, international assets, including Fox International (cable channels in Latin America, the EU, and Asia), Star India (the largest cable and broadcast network in India), and Fox’s 39% interest in Sky (Europe’s largest pay TV service).
  • Fox’s film properties, including 20th Century Fox, Fox Searchlight, and Fox Animation. These would bring along with them studios in Sydney and Los Angeles, but would not include the Fox Los Angeles backlot. Like the rest of the US film industry, Fox earns the majority of its film revenue overseas.
  • FX cable channels, National Geographic cable channels (of which Fox currently owns 75%), and twenty-two regional sports networks (RSNs). In terms of relative demand for the two cable networks, FX is a popular basic cable channel, but fairly far down the list of most-watched channels, while National Geographic doesn’t even crack the top 50. Among the RSNs, only one geographic overlap exists with Comcast’s current RSNs, and most of the Fox RSNs (at least 14 of the 22) are not in areas where Comcast has a substantial service presence.
  • The deal would also entail a shift in the companies’ ownership interests in Hulu. Hulu is currently owned in equal 30% shares by Disney, Comcast, and Fox, with the remaining, non-voting 10% owned by Time Warner. Either Comcast or Disney would hold a controlling 60% share of Hulu following the deal with Fox.

Analysis of the Antitrust Risk of a Comcast/Fox Merger

According to the joint proxy statement, Fox’s board discounted Comcast’s original $34.36/share offer — but not the $28.00/share offer from Disney — because of “the level of regulatory issues posed and the proposed risk allocation arrangements.” Significantly, on this basis the Fox board determined Disney’s offer to be superior.

The claim that a merger with Comcast poses sufficiently greater antitrust risk than a purchase by Disney to warrant its rejection out of hand is unsupportable, however. From an antitrust perspective, it is even plausible that a Comcast acquisition of the Fox assets would be on more-solid ground than would be a Disney acquisition.

Vertical Mergers Generally Present Less Antitrust Risk

A merger between Comcast and Fox would be predominantly vertical, while a merger between Disney and Fox, in contrast, would be primarily horizontal. Generally speaking, it is easier to get antitrust approval for vertical mergers than it is for horizontal mergers. As Bruce Hoffman, Director of the FTC’s Bureau of Competition, noted earlier this year:

[V]ertical merger enforcement is still a small part of our merger workload….

There is a strong theoretical basis for horizontal enforcement because economic models predict at least nominal potential for anticompetitive effects due to elimination of horizontal competition between substitutes.

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

On its face, and consistent with the last quarter century of merger enforcement by the DOJ and FTC, the Comcast acquisition would be less likely to trigger antitrust scrutiny, while the Disney acquisition would raise more straightforward antitrust issues.

This is true even in light of the fact that the DOJ decided to challenge the AT&T-Time Warner (AT&T/TWX) merger.

The AT&T/TWX merger is a single data point in a long history of successful vertical mergers that attracted little scrutiny, and no litigation, by antitrust enforcers (although several have been approved subject to consent orders).

Just because the DOJ challenged that one merger does not mean that antitrust enforcers generally, nor even the DOJ in particular, have suddenly become more hostile to vertical mergers.

Of particular importance to the conclusion that the AT&T/TWX merger challenge is of minimal relevance to predicting the DOJ’s reception in this case, the theory of harm argued by the DOJ in that case is far from well-accepted, while the potential theory that could underpin a challenge to a Disney/Fox merger is. As Bruce Hoffman further remarks:

I am skeptical of arguments that vertical mergers cause harm due to an increased bargaining skill; this is likely not an anticompetitive effect because it does not flow from a reduction in competition. I would contrast that to the elimination of competition in a horizontal merger that leads to an increase in bargaining leverage that could raise price or reduce output.

The Relatively Lower Risk of a Vertical Merger Challenge Hasn’t Changed Following the DOJ’s AT&T/Time Warner Challenge

Judge Leon is expected to rule on the AT&T/TWX merger in a matter of weeks. The theory underpinning the DOJ’s challenge is problematic (to say the least), and the case it presented was decidedly weak. But no litigated legal outcome is ever certain, and the court could, of course, rule against the merger nevertheless.

Yet even if the court does rule against the AT&T/TWX merger, this hardly suggests that a Comcast/Fox deal would create a greater antitrust risk than would a Disney/Fox merger.

A single successful challenge to a vertical merger — what would be, in fact, the first successful vertical merger challenge in four decades — doesn’t mean that the courts are becoming hostile to vertical mergers any more than the DOJ’s challenge means that vertical mergers suddenly entail heightened enforcement risk. Rather, it would simply mean that, given the specific facts of the case, the DOJ was able to make out its prima facie case, and that the defendants were unable to rebut it.

A ruling for the DOJ in the AT&T/TWX merger challenge would be rooted in a highly fact-specific analysis that could have no direct bearing on future cases.

In the AT&T/TWX case, the court’s decision will turn on its assessment of the DOJ’s argument that the merged firm could raise subscriber prices by a few pennies per subscriber. But as AT&T’s attorney aptly pointed out at trial (echoing the testimony of AT&T’s economist, Dennis Carlton):

The government’s modeled price increase is so negligible that, given the inherent uncertainty in that predictive exercise, it is not meaningfully distinguishable from zero.

Even minor deviations from the facts or the assumptions used in the AT&T/TWX case could completely upend the analysis — and there are important differences between the AT&T/TWX merger and a Comcast/Fox merger. True, both would be largely vertical mergers that would bring together programming and distribution assets in the home video market. But the foreclosure effects touted by the DOJ in the AT&T/TWX merger are seemingly either substantially smaller or entirely non-existent in the proposed Comcast/Fox merger.

Most importantly, the content at issue in AT&T/TWX is at least arguably (and, in fact, argued by the DOJ) “must have” programming — Time Warner’s premium HBO channels and its CNN news programming, in particular, were central to the DOJ’s foreclosure argument. By contrast, the programming that Comcast would pick up as a result of the proposed merger with Fox — FX (a popular, but non-essential, basic cable channel) and National Geographic channels (which attract a tiny fraction of cable viewing) — would be extremely unlikely to merit that designation.

Moreover, the DOJ made much of the fact that AT&T, through DirecTV, has a national distribution footprint. As a result, its analysis was dependent upon the company’s potential ability to attract new subscribers decamping from competing providers from whom it withholds access to Time Warner content in every market in the country. Comcast, on the other hand, provides cable service in only about 35% of the country. This significantly limits its ability to credibly threaten competitors because its ability to recoup lost licensing fees by picking up new subscribers is so much more limited.

And while some RSNs offer highly prized live sports programming, the mismatch between Comcast’s footprint and the Fox RSNs (only about 8 of the 22 Fox RSNs are in Comcast service areas) severely limits any ability or incentive the company would have to leverage that content for higher fees. Again, to the extent that RSN programming is not “must-have,” and to the extent there is no overlap between an RSN’s geographic area and Comcast’s service area, the situation is manifestly not the same as the one at issue in the AT&T/TWX merger.

In sum, a ruling in favor of the DOJ in the AT&T/TWX case would be far from decisive in predicting how the agency and the courts would assess any potential concerns arising from Comcast’s ownership of Fox’s assets.

A Comcast/Fox Deal May Entail Lower Antitrust Risk than a Disney/Fox Merger

As discussed below, concerns about antitrust enforcement risk from a Comcast/Fox merger are likely overstated. Perhaps more importantly, however, to the extent these concerns are legitimate, they apply at least as much to a Disney/Fox merger. There is, at minimum, no basis for assuming a Comcast deal would present any greater regulatory risk.

The Antitrust Risk of a Comcast/Fox Merger Is Likely Overstated

The primary theory upon which antitrust enforcers could conceivably base a Comcast/Fox merger challenge would be a vertical foreclosure theory. Importantly, such a challenge would have to be based on the incremental effect of adding the Fox assets to Comcast, and not on the basis of its existing assets. Thus, for example, antitrust enforcers would not be able to base a merger challenge on the possibility that Comcast could leverage NBC content it currently owns to extract higher fees from competitors. Rather, only if the combination of NBC programming with additional content from Fox could create a new antitrust risk would a case be tenable.

Enforcers would be unlikely to view the addition of FX and National Geographic to the portfolio of programming content Comcast currently owns as sufficient to raise concerns that the merger would give Comcast anticompetitive bargaining power or the ability to foreclose access to its content.

Although even less likely, enforcers could be concerned with the (horizontal) addition of 20th Century Fox filmed entertainment to Universal’s existing film production and distribution. But the theatrical film market is undeniably competitive, with the largest studio by revenue (Disney) last year holding only 22% of the market. The combination of 20th Century Fox with Universal would still result in a market share only around 25% based on 2017 revenues (and, depending on the year, not even result in the industry’s largest share).

There is also little reason to think that a Comcast controlling interest in Hulu would attract problematic antitrust attention. Comcast has already demonstrated an interest in diversifying its revenue across cable subscriptions and licensing, broadband subscriptions, and licensing to OVDs, as evidenced by its recent deal to offer Netflix as part of its Xfinity packages. Hulu likely presents just one more avenue for pursuing this same diversification strategy. And Universal has a history (see, e.g., this, this, and this) of very broad licensing across cable providers, cable networks, OVDs, and the like.

In the case of Hulu, moreover, the fact that Comcast is vertically integrated in broadband as well as cable service likely reduces the anticompetitive risk because more-attractive OVD content has the potential to increase demand for Comcast’s broadband service. Broadband offers larger margins (and is growing more rapidly) than cable, and it’s quite possible that any loss in Comcast’s cable subscriber revenue from Hulu’s success would be more than offset by gains in its content licensing and broadband subscription revenue. The same, of course, goes for Comcast’s incentives to license content to OVD competitors like Netflix: Comcast plausibly gains broadband subscription revenue from heightened consumer demand for Netflix, and this at least partially offsets any possible harm to Hulu from Netflix’s success.

At the same time, especially relative to Netflix’s vast library of original programming (an expected $8 billion worth in 2018 alone) and content licensed from other sources, the additional content Comcast would gain from a merger with Fox is not likely to appreciably increase its bargaining leverage or its ability to foreclose Netflix’s access to its content.     

Finally, Comcast’s ownership of Fox’s RSNs could, as noted, raise antitrust enforcers’ eyebrows. Enforcers could be concerned that Comcast would condition competitors’ access to RSN programming on higher licensing fees or prioritization of its NBC Sports channels.

While this is indeed a potential risk, it is hardly a foregone conclusion that it would draw an enforcement action. Among other things, NBC is far from the market leader, and improving its competitive position relative to ESPN could be viewed as a benefit of the deal. In any case, potential problems arising from ownership of the RSNs could easily be dealt with through divestiture or behavioral conditions; they are extremely unlikely to lead to an outright merger challenge.

The Antitrust Risk of a Disney Deal May Be Greater than Expected

While a Comcast/Fox deal is not entirely free of antitrust enforcement risk, it certainly doesn’t entail sufficient risk to deem the deal dead on arrival. Moreover, it may entail less antitrust enforcement risk than would a Disney/Fox tie-up.

Yet, curiously, the joint proxy statement doesn’t mention any antitrust risk from the Disney deal at all and seems to suggest that the Fox board applied no risk discount in evaluating Disney’s bid.

Disney — already the market leader in the filmed entertainment industry — would acquire an even larger share of box office proceeds (and associated licensing revenues) through acquisition of Fox’s film properties. Perhaps even more important, the deal would bring the movie rights to almost all of the Marvel Universe within Disney’s ambit.

While, as suggested above, even that combination probably wouldn’t trigger any sort of market power presumption, it would certainly create an entity with a larger share of the market and stronger control of the industry’s most valuable franchises than would a Comcast/Fox deal.

Another relatively larger complication for a Disney/Fox merger arises from the prospect of combining Fox’s RSNs with ESPN. Whatever ability or incentive either company would have to engage in anticompetitive conduct surrounding sports programming, that risk would seem to be more significant for the undisputed market leader, Disney. At the same time, although ESPN remains powerful, demand for it on cable has been flagging. Disney could well see the ability to bundle ESPN with regional sports content as a way to prop up subscription revenues for ESPN — a practice, in fact, that it has employed successfully in the past.

Finally, it must be noted that licensing of consumer products is an even bigger driver of revenue from filmed entertainment than is theatrical release. No other company comes close to Disney in this space.

Disney is the world’s largest licensor, earning almost $57 billion in 2016 from licensing properties like Star Wars and Marvel Comics. Universal is in a distant 7th place, with 2016 licensing revenue of about $6 billion. Adding Fox’s (admittedly relatively small) licensing business would enhance Disney’s substantial lead (even the number two global licensor, Meredith, earned less than half of Disney’s licensing revenue in 2016). Again, this is unlikely to be a significant concern for antitrust enforcers, but it is notable that, to the extent it might be an issue, it is one that applies to Disney and not Comcast.

Conclusion

Although I hope to address these issues in greater detail in the future, for now the preliminary assessment is clear: There is no legitimate basis for ascribing a greater antitrust risk to a Comcast/Fox deal than to a Disney/Fox deal.

At this point, only the most masochistic and cynical among DC’s policy elite actually want the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promise only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it’s not completely clear that a CRA resolution aimed at a regulatory classification decision will work quite the way Congress intends, and it could just trigger more litigation cycles, largely because it is unclear what parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom’s position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom’s central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) strip consumers of certain important protections — it would take away the FCC’s transparency requirements for ISPs and imperil privacy protections currently ensured by the FTC — and 2) prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back and forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Virtually whatever that legislative solution ends up looking like, it would be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom’s arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it) what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and it would disable the FCC from offering another transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has been and continues to have important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC’s purview, placing it back into the lap of the FCC — which is already prohibited from adopting privacy rules following last year’s successful CRA resolution undoing the Wheeler FCC’s broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether or not Congress is even accomplishing anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or it could be the case that even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched, viewpoints that have little relevance to the on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer’s telephone to the carrier’s switch. Once at the switch, various other services could be offered — but providing them didn’t require more than a basic analog voice circuit to the customer’s home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today, however, universal service, at least from the perspective of closing the digital divide, means something different. The technological needs of rural America are different than those of urban America; the technological needs of poor and lower-income America are different than those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country’s many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach how to close digital divides, need to be based in the first instance on the actual needs of the community that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more typical (to the DC crowd) use cases.

On the other end of the agricultural spectrum, many farms don’t own their own combines. Combines cost upwards of a million dollars, and one modern combine is sufficient to tend several hundred acres in a given farming season. It is common for farmers to hire someone who owns a combine to service their fields; one combine service may operate on a dozen farms during harvest season. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.

Going to the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren’t about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, just a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides faced elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced price school lunches — a metric that corresponds with income — and rely on anchor institutions for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the DC policy firmament, are fundamentally different. These discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much at all to accomplish either of these.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence a decision whether to execute upon a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren't going to be built unless their developers have access to the broadband connections needed to build and maintain them, and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services, not, as the Wheeler-era virtuous cycle would have it, the other way around.

The effect of Washington’s obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a strawman of the rural digital divide, using it as a foil to protect and advocate for their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and direness of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287). More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that among other things funded a $600 million pilot program to award grants and loans for rural broadband built out through the Department of Agriculture’s Rural Utilities Service. But both of these efforts rely too heavily on throwing money at the rural divide (speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it’s just another effort to give carriers cheap money, which doesn’t do much to help close the divide!). It is, nonetheless, good to see urgent calls for and an interest in experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set are taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move along its docket items by the numbers in order to avoid the relentless criticism that it’s just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from regulating ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity only do so from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that affect state agencies of New Jersey directly. But it’s probably impermissible that those contractual requirements can be used as a lever to force ISPs to treat third parties (i.e., New Jersey’s citizens) under net neutrality principles.

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more wiggle room in the legal interpretation because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to explicitly legislate, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Everyone thinks that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, it does not mean that the FCC is buying time for ISPs. In the end simple politics explains state actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC’s decisions.

The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices.  Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).

Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services.  Key excerpts from the summary of the Ninth Circuit’s opinion follow:

The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .

The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.

Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.

A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:

This statutory interpretation [that the common carrier exception is activity-based] also accords with common sense. The FTC is the leading federal consumer protection agency and, for many decades, has been the chief federal agency on privacy policy and enforcement. Permitting the FTC to oversee unfair and deceptive non-common-carriage practices of telecommunications companies has practical ramifications. New technologies have spawned new regulatory challenges. A phone company is no longer just a phone company. The transformation of information services and the ubiquity of digital technology mean that telecommunications operators have expanded into website operation, video distribution, news and entertainment production, interactive entertainment services and devices, home security and more. Reaffirming FTC jurisdiction over activities that fall outside of common-carrier services avoids regulatory gaps and provides consistency and predictability in regulatory enforcement.

But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service?  The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should).  In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, further reduced it to four markets — all wholesale markets — that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Some non-European countries, such as Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

Like in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and user’s rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that even where a telecom regulator perceives a novel problem, it is competition law, grounded in economic principles, that brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP's behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing what information ISPs must disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved through a process administered by an independent industry group — may be referred to the NZCC for investigation under the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, as well as practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

Just in time for tomorrow’s FCC vote on repeal of its order classifying Internet Service Providers as common carriers, the St. Louis Post-Dispatch has published my op-ed entitled The FCC Should Abandon Title II and Return to Antitrust.

Here’s the full text:

The Federal Communications Commission (FCC) will soon vote on whether to repeal an Obama-era rule classifying Internet Service Providers (ISPs) as “common carriers.” That rule was put in place to achieve net neutrality, an attractive-sounding goal that many Americans—millennials especially—reflexively support.

In Missouri, voices as diverse as the St. Louis Post-Dispatch, the Joplin Globe, and the Archdiocese of St. Louis have opposed repeal of the Obama-era rule.

Unfortunately, few people who express support for net neutrality understand all it entails. Even fewer recognize the significant dangers of pursuing net neutrality using the means the Obama-era FCC selected. All many know is that they like neutrality generally and that smart-sounding celebrities like John Oliver support the Obama-era rule. They really need to know more.

First, it’s important to understand what a policy of net neutrality entails. In essence, it prevents ISPs from providing faster or better transmission of some Internet content, even where the favored content provider is willing to pay for prioritization.

That sounds benign—laudable, even—until one considers all that such a policy prevents. Under strict net neutrality, an ISP couldn’t prioritize content transmission in which congestion delays ruin the user experience (say, an Internet videoconference between a telemedicine system operated by the University of Missouri hospital and a rural resident of Dent County) over transmissions in which delays are less detrimental (say, downloads from a photo-sharing site).
Strict net neutrality would also preclude a mobile broadband provider from exempting popular content providers from data caps. Indeed, T-Mobile was hauled before the FCC to justify its popular “Binge On” service, which offered cost-conscious subscribers unlimited access to Netflix, ESPN, and HBO.

The fact is, ISPs have an incentive to manage their traffic in whatever way most pleases subscribers. The vast majority of Americans have a choice of ISPs, so managing content in any manner that adversely affects the consumer experience would hurt business. ISPs are also motivated to design subscription packages that consumers most desire. They shouldn’t have to seek government approval of innovative offerings.

For evidence that competition protects consumers from harmful instances of non-neutral network management, consider the record. The commercial Internet was born, thrived, and became the brightest spot in the American economy without formal net neutrality rules. History provides little reason to believe that the parade of horribles net neutrality advocates imagine will ever materialize.

Indeed, in seeking to justify its net neutrality policies, the Obama-era FCC could come up with only four instances of harmful non-neutral network management over the entire history of the commercial Internet. That should come as no surprise. Background antitrust rules, in place long before the Internet was born, forbid the speculative harms net neutrality advocates envision.

Even if net neutrality regulation were desirable as a policy matter, the means by which the FCC secured it was entirely inappropriate. Before it adopted the current approach, which reclassified ISPs as common carriers subject to Title II of the 1934 Communications Act, the FCC was crafting a narrower approach using authority granted by the 1996 Telecommunications Act.

It abruptly changed course after President Obama, reeling from a shellacking in the 2014 midterm elections, sought to shore up his base by posting a video calling for “the strongest possible rules” on net neutrality, including Title II reclassification. Prodded by the President, the supposedly independent commissioners abandoned their consensus that Title II was too extreme and voted along party lines to treat the Internet as a utility.

Title II reclassification has resulted in the sort of “Mother, may I?” regulatory approach that impedes innovation and investment. In the first half of 2015, as the Commission was formulating its new Title II approach, spending by ISPs on capital equipment fell by an average of 8%. That was only the third time in the history of the commercial Internet that infrastructure investment fell from the previous year. The other two times were in 2001, following the dot-com bust, and 2009, after the 2008 financial crash and ensuing recession. For those remote communities in Missouri still looking for broadband to reach their doorsteps, government policies need to incentivize more investment, not restrict it.

To enhance innovation and encourage broadband deployment, the FCC should reverse its damaging Title II order and leave concerns about non-neutral network management to antitrust law. It was doing just fine.