
Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating that regulatory environment can be just as difficult as managing the actual provision of service.

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
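To see how quickly in-kind obligations can swamp the statutory cap, consider a stylized calculation. All figures here are hypothetical, chosen only to illustrate the mechanism, not drawn from any actual franchise:

```python
# Hypothetical illustration of how in-kind obligations can push the effective
# franchise burden past the statutory five percent cap. All figures invented.

gross_cable_revenue = 10_000_000             # annual gross cable revenue ($)
statutory_cap = 0.05 * gross_cable_revenue   # maximum lawful franchise fee: $500,000

cash_franchise_fee = statutory_cap  # assume the LFA already charges the full 5%
in_kind_marginal_cost = 150_000     # cost of providing "free" services and facilities
in_kind_opportunity_cost = 100_000  # foregone revenue from capacity reserved for the LFA

effective_burden = cash_franchise_fee + in_kind_marginal_cost + in_kind_opportunity_cost
print(f"{effective_burden / gross_cable_revenue:.1%} of gross revenue")  # 7.5%
```

On these invented numbers, the operator's effective burden is 7.5 percent of gross revenues, half again the statutory limit, and, as noted above, some portion of that excess will be passed through to subscribers.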

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed-use networks (i.e., imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs or other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).

While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it. In this blog post, we evaluate the state AGs’ claims (and find them wanting).

Four firms good, three firms bad?

The state AGs’ opposition to the T-Mobile/Sprint merger is based on a claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:

The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.

. . . 

Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.

But there are no economic grounds for the assertion that a four-firm industry is on a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.

A proper assessment of this transaction—as well as any other telecom merger—requires accounting for the specific characteristics of the markets affected by the merger. The accounting would include, most importantly, the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.

Opponents claim this merger takes us from four to three national carriers. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.

In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will narrow following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improved service, and, over time, lower costs of access.

Is prepaid a separate market?

The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitutes a separate product market:

There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with a MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.

Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between the prepaid and postpaid markets, today the line between them is at least blurry, and may not even be a meaningful divide at all.

To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market. 

But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, there are some prepaid consumers who don’t have the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past, and that have not been shown to be possible or likely.

One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with the DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.

Will the merger harm low-income consumers?

While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention the issue received in submissions to the federal agencies. For example, the Communication Workers of America opined:

the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.

This is merely an assertion, and a misleading one. To the extent the “digital divide” would grow following the merger, it would be because urban access improves more rapidly than rural access.

Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge. 

To the contrary: in the absence of a merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilization of that spectrum would require substantial investment in new infrastructure and additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.

It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings that are expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, by exactly how much each group will benefit. But, prima facie, the prospect of improvement in rural access seems a strong argument in favor of the merger from a public interest standpoint.

The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than among those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.

Concluding remarks

By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially for those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech in violation of the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
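For readers unfamiliar with the framework, the standard two-party Nash bargaining solution can be written compactly. This is the textbook formulation, not a description of any particular expert's model:

\[
\max_{t}\;\bigl(u_1(t) - d_1\bigr)\bigl(u_2(t) - d_2\bigr),
\]

where \(u_i(t)\) is party \(i\)'s payoff under an agreement with terms \(t\), and \(d_i\) is its disagreement payoff (the value of its outside option). With transferable surplus, the solution gives each party its outside option plus an equal share of the remaining joint surplus, so any change that raises \(d_1\) mechanically shifts the negotiated terms in party 1's favor. That is the formal sense in which "leverage" enters these models.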

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
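The arithmetic structure of that three-input calculation is simple, which is why the reliability of each input matters so much. A sketch, using placeholder numbers rather than the figures actually used at trial:

```python
# Sketch of the three-input leverage calculation described above.
# All numbers are hypothetical placeholders, not trial figures.

subscribers_lost = 1_000_000   # (1) subscribers a rival loses in a long-term Turner blackout
diversion_to_att = 0.30        # (2) share of those lost subscribers who switch to AT&T
annual_margin_per_sub = 480.0  # (3) AT&T's profit margin per gained subscriber ($/year)

# The merged firm's gain from rivals' lost subscribers offsets Turner's lost
# affiliate fees during a blackout, which is the claimed source of added leverage.
offsetting_gain = subscribers_lost * diversion_to_att * annual_margin_per_sub
print(f"${offsetting_gain:,.0f} per year")  # $144,000,000 per year
```

Because the output is just the product of the three inputs, discrediting any one of them, as Judge Leon did, undermines the leverage estimate as a whole.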

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
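In rough terms, the FTC's margin-squeeze story can be expressed as a comparison of the OEM's gains from trade under the two sourcing options. A stylized sketch with invented prices and rates:

```python
# Stylized version of the gains-from-trade comparison described above.
# All prices and royalty rates are invented for illustration.

device_price = 500.0
frand_royalty = 15.0      # assumed FRAND rate for Qualcomm's SEPs
royalty_surcharge = 10.0  # alleged supra-FRAND premium paid when using a rival's chip

qualcomm_chip_price = 30.0
rival_chip_price = 25.0

margin_with_qualcomm = device_price - qualcomm_chip_price - frand_royalty
margin_with_rival = device_price - rival_chip_price - (frand_royalty + royalty_surcharge)

# The alleged surcharge erodes the OEM's margin when it buys a rival's chip,
# which the FTC says leaves rival chipmakers too little room to compete.
print(margin_with_qualcomm, margin_with_rival)  # 455.0 450.0
```

On the FTC's theory, the surcharge functions like a tax on rivals' chip sales; the dispute is over whether any such supra-FRAND surcharge actually exists.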

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990 to 2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases, and he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder: as complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO), expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit’s evaluation of the prior FCC net neutrality order – the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of long-ago-addressed ephemeral distinctions, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin – only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public safety concerns – like declarations of national emergencies – are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument about a provision of the Communications Act (47 USC 257(c)), which Congress repealed after the Order was adopted and which was a noteworthy part of the notice the FCC gave when the Order was proposed, that could lead the Court to remand the Order back to the Commission.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.


Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers, for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which, if prioritization were to be deployed, it would be so that emergency services could work over older infrastructure, or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high-speed Internet – we have counties without doctors in them). A state in which, if paid prioritization were to be developed, it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rulemaking followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the very people net neutrality advocates like to pat themselves on the back for championing. We should all be angry about this, and we should demand that Congress put this debate out of our misery.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman Maureen Ohlhausen noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist Luke Froeb puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones and licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s other most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good: these laws have proven wholly ineffective at curtailing the robocall menace, and it is hard to call any law that ineffective “good.” And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-of issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. We were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare, and most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny, on the basis that it was a regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for doing things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vainglorious and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
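The core idea is ordinary public-key cryptography: the originating carrier signs the call metadata, and the terminating carrier verifies the signature before trusting the displayed number. The sketch below is a toy illustration of that idea, not an implementation of the framework actually being deployed (STIR/SHAKEN, which signs a standardized "PASSporT" token bound to a carrier certificate chain):

```python
# Toy illustration of cryptographically attested Caller ID. A real deployment
# (STIR/SHAKEN) signs a standardized token and validates certificate chains;
# this sketch shows only the sign-and-verify core. Numbers are placeholders.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

carrier_key = ec.generate_private_key(ec.SECP256R1())  # originating carrier's key

call = json.dumps(
    {"orig": "+12025550123", "dest": "+12125550199", "iat": 1565000000}
).encode()
signature = carrier_key.sign(call, ec.ECDSA(hashes.SHA256()))

# The terminating carrier verifies against the originator's public key.
try:
    carrier_key.public_key().verify(signature, call, ec.ECDSA(hashes.SHA256()))
    print("Caller ID attested by originating carrier")
except InvalidSignature:
    print("Spoofed or unverifiable caller ID")
```

A robocaller spoofing a number it does not control cannot produce a valid signature from a trusted carrier, which is what makes automated blocking decisions tractable.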

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.

 

On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements, indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable, and non-discriminatory”) commitments in the context of standard-setting bodies, and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.

At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.

The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018 (about which more below). But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:

  • It has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices paid by end consumers for cellphones and tablets.
  • Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license its chipset-maker rivals; and its exclusive deals with Apple.
  • The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
  • Given that Qualcomm has committed to standards-setting bodies to license these patents on FRAND terms, such behavior qualifies as a breach of FRAND.

The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:

[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that [falls short of] support[ing] a vague standalone action under a Section 5 FTC claim.

Qualcomm filed a motion to dismiss on April 3, 2017, which was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates § 1 and § 2 of the Sherman Act and that Qualcomm had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTC Act.

It is important to note that the core of the FTC’s argument regarding Qualcomm’s abuse of its dominant position rests on the claim that its “no license, no chips” policy breaches its FRAND obligations. The FTC falls short, however, of showing that the royalties Qualcomm charges OEMs actually exceed FRAND rates (and thus amount to a breach), or that they qualify as the “tax” posited by the FTC’s price-squeeze theory.

(The court did not address whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though their scope remains quite amorphous.)

On August 30, the FTC filed a motion for partial summary judgment on claims relating to the applicability of California contract law. This would leave the antitrust issues to be decided in the subsequent hearing, which is set for January of next year.

In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S.-based standards development organizations (SDOs):

  1. The Telecommunications Industry Association (TIA) and
  2. The Alliance for Telecommunications Industry Solutions (ATIS).

These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.

The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem-chip manufacturers and sellers, and it asks the court to grant summary judgment on the basis that there are no disputed facts regarding the obligation. It submits that this would “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments,” including on the requirement to license competitors under Qualcomm’s parallel commitments to ETSI, a third SDO.

A review of the FTC’s heavily redacted filing and Qualcomm’s subsequent response indicates that questions of fact and law remain regarding Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.

Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.

The IP licensing policies of the two SDOs provide for licensing of relevant patents, on FRAND terms, to all applicants who implement the standards. The key issues, however, are whether components such as modem chips can be said to implement standards and whether component-level licensing falls within this ambit; the policies themselves do not clearly resolve these issues.

Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice the cellular standards; the standards do not define the operation of modem chips.

In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. But it does not logically follow that each individual component of the device separately practices or implements that standard, even though it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.

These distinctions are significant for interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here, and Qualcomm’s submission draws attention to this.

An important factor in determining who is a “user” of a particular standard is the extent to which the standard is practiced or implemented. Some SDOs have addressed this in their policies by clarifying that FRAND obligations extend to implementations that are “wholly compliant” or “fully conforming” with the specific standards. Clause 6.1 of the ETSI IPR Policy, for example, limits a patent holder’s obligation to make licenses available to “methods” and “equipment”: it defines equipment as “a system or device fully conforming to a standard,” and methods as “any method or operation fully conforming to a standard.”

It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said, in a decision, that there is no agreement on the definition of the phrase “wholly compliant implementation.”

Device-level licensing is the prevailing industry-wide practice, followed by companies like Ericsson, InterDigital, Nokia, and others. In November 2017, the European Commission issued guidelines on licensing of SEPs and took a balanced approach on this issue by declining to prescribe component-level licensing.

The former director general of ETSI, Karl Rosenbrock, adopts a contrary view, explaining that ETSI’s policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”

Dr. Bertram Huber, a legal expert who personally participated in the drafting of the IPR policy of ETSI, wrote a response to Rosenbrock in which he explains that ETSI’s IPR policy imposes licensing obligations only for systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard-essential patents practiced in those end-devices.

Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in a collaboration called the 3rd Generation Partnership Project (3GPP), which works on the development of cellular technologies. TIA and ATIS are both accredited by ANSI. These SDOs are therefore likely to influence one another through the policies each adopts. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.

The non-discrimination obligation, which, according to the FTC, requires Qualcomm to license its competitors who manufacture and sell chips, is limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and depends upon the specific SDO’s policy. As discussed, the policies of ATIS and TIA are unclear on this.

In conclusion, the FTC’s filing does not obviate the need to hear extrinsic evidence on what Qualcomm’s commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA on whether they include component-level licensing, and on whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.

FCC Commissioner Rosenworcel penned an article this week on the doublespeak coming out of the current administration with respect to trade and telecom policy. On one hand, she argues, the administration has proclaimed 5G to be an essential part of our future commercial and defense interests. But, she tells us, the administration has, on the other hand, imposed tariffs on Chinese products that are important for the development of 5G infrastructure, thereby raising the costs of roll-out. This is a sound critique: regardless of where one stands on the reasonableness of tariffs, they unquestionably raise the prices of the goods on which they are placed, and raising the price of inputs to the 5G ecosystem can only slow the pace at which 5G technology is deployed.

Unfortunately, Commissioner Rosenworcel’s fervor for advocating the need to reduce the costs of 5G deployment seems animated by the courageous act of a Democratic commissioner decrying the policies of a Republican President and is limited to a context where her voice lacks any power to actually affect policy. Even as she decries trade barriers that would incrementally increase the costs of imported communications hardware, she staunchly opposes FCC proposals that would dramatically reduce the cost of deploying next generation networks.

Given the opportunity to reduce the costs of 5G deployment by a factor far more significant than that by which tariffs will increase them, her preferred role as a Democratic commissioner is that of resistance fighter. She acknowledges that “we will need 800,000 of these small cells to stay competitive in 5G” — a number significantly above the “roughly 280,000 traditional cell towers needed to blanket the nation with 4G.” Yet when she has had the opportunity to join the Commission in speeding deployment, she has instead dissented. Party over policy.

In this year’s “Historical Preservation” Order, for example, the Commission voted to expedite deployment on non-Tribal lands and to exempt small cell deployments from certain onerous review processes under both the National Historic Preservation Act and the National Environmental Policy Act of 1969. Commissioner Rosenworcel dissented from the Order, claiming that the FCC has “long-standing duties to consult with Tribes before implementing any regulation or policy that will significantly or uniquely affect Tribal governments, their land, or their resources.” Never mind that the FCC engaged in extensive consultation with Tribal governments prior to adopting the Order.

Indeed, in adopting the Order, the Commission found that the Order did nothing to disturb deployment on Tribal lands at all, and affected only the ability of Tribal authorities to reach beyond their borders to require fees and lengthy reviews for small cells on lands in which Tribes could claim merely an “interest.”

According to the Order, the average number of Tribal authorities seeking to review wireless deployments in a given geographic area nearly doubled between 2008 and 2017. During the same period, commenters consistently noted that the fees charged by Tribal authorities for review of deployments increased dramatically.

One environmental consultant noted that fees for projects he was involved with increased from an average of $2,000 in 2011 to $11,450 in 2017. Verizon’s fees are $2,500 per small cell site just for Tribal review. Of the 8,100 requests that Verizon submitted for Tribal review between 2012 and 2015, just 29 (roughly 0.4%) resulted in a finding that there would be an adverse effect on Tribal historic properties. That means that Verizon paid over $20 million to Tribal authorities over that period for historic reviews that resulted in statistically nil action. Along the same lines, Sprint’s fees are so high that it estimates that “it could construct 13,408 new sites for what 10,000 sites currently cost.”
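Those figures are easy to verify with back-of-the-envelope arithmetic. The short sketch below uses only the numbers cited above, assuming a flat $2,500 fee per review request:

```python
# Back-of-the-envelope check of the Verizon figures cited above,
# assuming a flat $2,500 Tribal-review fee per request.
requests = 8_100          # review requests submitted, 2012-2015
fee_per_request = 2_500   # dollars
adverse_findings = 29

total_fees = requests * fee_per_request
adverse_rate = adverse_findings / requests

print(f"Total review fees: ${total_fees:,}")        # $20,250,000 ("over $20 million")
print(f"Adverse-finding rate: {adverse_rate:.2%}")  # 0.36%
```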

In other words, Tribal review practices — of deployments not on Tribal land — impose a substantial tariff upon 5G deployment, increasing its cost and slowing its pace.

There is a similar story in the Commission’s adoption of, and Commissioner Rosenworcel’s partial dissent from, the recent Wireless Infrastructure Order. Although Commissioner Rosenworcel offered many helpful suggestions (for instance, endorsing the OTARD proposal that Brent Skorup has championed) and nodded to the power of the market to solve many problems, she also dissented on central parts of the Order. Her dissent shows an unfortunate concern for provincial political interests, placing those interests above the Commission’s mission of ensuring timely deployment of advanced wireless communication capabilities to all Americans.

Commissioner Rosenworcel’s concern about the Wireless Infrastructure Order is that it would prevent state and local governments from imposing fees sufficient to recover the costs they incur in supporting wireless deployments by private enterprise, or from imposing aesthetic requirements on those deployments. Stated this way, her objections seem almost reasonable: surely local governments should be able to recover the costs they incur in facilitating private enterprise, and surely they have an interest in ensuring that private actors respect the aesthetic interests of the communities in which they build infrastructure.

The problem for Commissioner Rosenworcel is that the Order explicitly takes these concerns into account:

[W]e provide guidance on whether and in what circumstances aesthetic requirements violate the Act. This will help localities develop and implement lawful rules, enable providers to comply with these requirements, and facilitate the resolution of disputes. We conclude that aesthetics requirements are not preempted if they are (1) reasonable, (2) no more burdensome than those applied to other types of infrastructure deployments, and (3) objective and published in advance.

The Order neither prohibits localities from recovering costs nor prohibits them from imposing aesthetic requirements. Rather, it requires merely that those costs and requirements be reasonable. The purpose of the Order isn’t to restrict localities from engaging in reasonable conduct; it is to prohibit them from engaging in unreasonable, costly conduct, while providing guidance as to what cost recovery and aesthetic considerations are reasonable (and therefore permissible).

The reality is that localities have a long history of using cost recovery — and especially “soft” or subjective requirements such as aesthetics — to extract significant rents from communications providers. In the 1980s this slowed the deployment and increased the costs of cable television. In the 2000s it slowed the deployment and increased the costs of fiber-based Internet service. Today it is slowing the deployment and increasing the costs of advanced wireless services. And like any tax — or tariff — the cost is ultimately borne by consumers.

Although we are broadly sympathetic to arguments about local control (and other 10th Amendment-related concerns), the FCC’s goal in the Wireless Infrastructure Order was not to trample upon the autonomy of small municipalities; it was to implement a reasonably predictable permitting process that would facilitate 5G deployment. Those affected would not be the small, local towns attempting to maintain a desirable aesthetic for their downtowns, but large and politically powerful cities like New York City, where the fees per small cell site can exceed $5,000 per installation. Such extortionate fees are effectively a tax on smartphone users and others who will rely on 5G for communications. The Order estimates that capping these fees would stimulate over $2.4 billion in additional infrastructure buildout, with widespread benefits to consumers and the economy.

Meanwhile, Commissioner Rosenworcel cries “overreach!” “I do not believe the law permits Washington to run roughshod over state and local authority like this,” she said. Her federalist bent is welcome — or it would be, if it weren’t in such stark contrast to her anti-federalist preference for preempting states from establishing rules governing their own internal political institutions when doing so suits her preferred political objectives. We are referring, of course, to Rosenworcel’s support for the decision by the previous administration’s FCC to preempt state laws prohibiting the extension of municipal governments’ broadband systems. The order doing so was plainly illegal from the moment it was passed, as every court that has looked at it has held. That, she was okay with. But imposing reasonable federal limits on states’ and localities’ ability to extract political rents by abusing their franchising processes is apparently beyond the pale.

Commissioner Rosenworcel is right that the FCC should try to promote market solutions like Brent’s OTARD proposal. And she is also correct in opposing dangerous and destructive tariffs that will increase the cost of telecommunications equipment. Unfortunately, she gets it dead wrong when she supports a stifling regulatory status quo that will surely make it unduly difficult and expensive to deploy next generation networks — not least for those most in need of them. As Chairman Pai noted in his Statement on the Order: “When you raise the cost of deploying wireless infrastructure, it is those who live in areas where the investment case is the most marginal — rural areas or lower-income urban areas — who are most at risk of losing out.”

Reconciling those two positions entails nothing more than pointing to the time-honored Washington tradition of Politics Over Policy. The point is not (entirely) to call out Commissioner Rosenworcel; she’s far from the only person in Washington to make this kind of crass political calculation. In fact, she’s far from the only FCC Commissioner ever to have done so.

One need look no further than the previous FCC Chairman, Tom Wheeler, to see the hypocritical politics of telecommunications policy in action. (And one need look no further than Tom Hazlett’s masterful book, The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone to find a catalogue of its long, sordid history).

Indeed, Larry Downes has characterized Wheeler’s reign at the FCC (following a lengthy recounting of all its misadventures) as having left the agency “more partisan than ever”:

The lesson of the spectrum auctions—one right, one wrong, one hanging in the balance—is the lesson writ large for Tom Wheeler’s tenure at the helm of the FCC. While repeating, with decreasing credibility, that his lodestone as Chairman was simply to encourage “competition, competition, competition” and let market forces do the agency’s work for it, the reality, as these examples demonstrate, has been something quite different.

The Wheeler FCC has instead been driven by a dangerous combination of traditional rent-seeking behavior by favored industry clients, potent pressure from radical advocacy groups and their friends in the White House, and a sincere if misguided desire by Wheeler to father the next generation of network technologies, which quickly mutated from sound policy to empty populism even as technology continued on its own unpredictable path.

* * *

And the Chairman’s increasingly autocratic management style has left the agency more political and more partisan than ever, quick to abandon policies based on sound legal, economic and engineering principles in favor of bait-and-switch proceedings almost certain to do more harm than good, if only unintentionally.

The great irony is that, while Commissioner Rosenworcel’s complaints are backed by a legitimate concern that the Commission has waited far too long to take action on spectrum issues, the criticism should properly fall not upon the current Chair, but — you guessed it — his predecessor, Chairman Wheeler (and his predecessor, Julius Genachowski). Of course, in true partisan fashion, Rosenworcel was fawning in her praise for her political ally’s spectrum agenda, lauding it on more than one occasion as going “to infinity and beyond!”

Meanwhile, Rosenworcel has taken virtually every opportunity to chide and castigate Chairman Pai’s efforts to get more spectrum into the marketplace, most often criticizing them as too little, too slow, and too late. Yet from any objective perspective, the current FCC has been addressing spectrum issues at a breakneck pace, as fast as, or faster than, any prior Commission. There is an upper limit to the speed at which the federal bureaucracy can work, and Chairman Pai has kept the Commission pushed right up against that limit.

It’s a shame Commissioner Rosenworcel prefers to blame Chairman Pai for the problems she had a hand in creating, and President Trump for problems she has no ability to correct. It’s even more of a shame that, given the opportunity to address the problems she so often decries — by working to get more spectrum deployed and put into service more quickly, and at lower cost to industry and consumers alike — she prefers to dutifully wear the hat of resistance instead.

But that’s just politics, we suppose. And like any tariff, it makes us all poorer.

As has been rumored in the press for a few weeks, today Comcast announced it is considering making a renewed bid for a large chunk of Twenty-First Century Fox’s (Fox) assets. Fox is in the process of a significant reorganization, entailing primarily the sale of its international and non-television assets. Fox itself will continue, but with a focus on its US television business.

In December of last year, Fox agreed to sell these assets to Disney, in the process rejecting a bid from Comcast. Comcast’s initial bid was some 16% higher than Disney’s, although there were other differences in the proposed deals, as well.

In April of this year, Disney and Fox filed a proxy statement with the SEC explaining the basis for the board’s decision, including predominantly the assertion that the Comcast bid (NB: Comcast is identified as “Party B” in that document) presented greater regulatory (antitrust) risk.

As noted, today Comcast announced it is in “advanced stages” of preparing another unsolicited bid. This time,

Any offer for Fox would be all-cash and at a premium to the value of the current all-share offer from Disney. The structure and terms of any offer by Comcast, including with respect to both the spin-off of “New Fox” and the regulatory risk provisions and the related termination fee, would be at least as favorable to Fox shareholders as the Disney offer.

We now know, from the April proxy filing, that Fox’s board rejected Comcast’s earlier offer largely on the basis of its assessment of the antitrust risk that offer presented. Because that risk assessment (along with the difference between an all-cash and an all-share offer) would now be the primary distinguishing feature between Comcast’s and Disney’s bids, it is worth evaluating that conclusion as Fox and its shareholders consider Comcast’s new bid.

In short: There is no basis for ascribing a greater antitrust risk to Comcast’s purchase of Fox’s assets than to Disney’s.

Summary of the Proposed Deal

Post-merger, Fox will continue to own Fox News Channel, Fox Business Network, Fox Broadcasting Company, Fox Sports, Fox Television Stations Group, and sports cable networks FS1, FS2, Fox Deportes, and Big Ten Network.

The deal would transfer to Comcast (or Disney) the following:

  • Primarily, international assets, including Fox International (cable channels in Latin America, the EU, and Asia), Star India (the largest cable and broadcast network in India), and Fox’s 39% interest in Sky (Europe’s largest pay TV service).
  • Fox’s film properties, including 20th Century Fox, Fox Searchlight, and Fox Animation. These would bring along with them studios in Sydney and Los Angeles, but would not include the Fox Los Angeles backlot. Like the rest of the US film industry, the majority of Fox’s film revenue is earned overseas.
  • FX cable channels, National Geographic cable channels (of which Fox currently owns 75%), and twenty-two regional sports networks (RSNs). In terms of relative demand for the two cable networks, FX is a popular basic cable channel, but fairly far down the list of most-watched channels, while National Geographic doesn’t even crack the top 50. Among the RSNs, only one geographic overlap exists with Comcast’s current RSNs, and most of the Fox RSNs (at least 14 of the 22) are not in areas where Comcast has a substantial service presence.
  • The deal would also entail a shift in the companies’ ownership interests in Hulu. Hulu is currently owned in equal 30% shares by Disney, Comcast, and Fox, with the remaining, non-voting 10% owned by Time Warner. Either Comcast or Disney would hold a controlling 60% share of Hulu following the deal with Fox.

Analysis of the Antitrust Risk of a Comcast/Fox Merger

According to the joint proxy statement, Fox’s board discounted Comcast’s original $34.36/share offer — but not the $28.00/share offer from Disney — because of “the level of regulatory issues posed and the proposed risk allocation arrangements.” Largely on this basis, the Fox board determined Disney’s offer to be superior.

The claim that a merger with Comcast poses sufficiently greater antitrust risk than a purchase by Disney to warrant rejecting it out of hand is unsupportable, however. From an antitrust perspective, it is even plausible that a Comcast acquisition of the Fox assets would be on more solid ground than a Disney acquisition.

Vertical Mergers Generally Present Less Antitrust Risk

A merger between Comcast and Fox would be predominantly vertical, while a merger between Disney and Fox, in contrast, would be primarily horizontal. Generally speaking, it is easier to get antitrust approval for vertical mergers than it is for horizontal mergers. As Bruce Hoffman, Director of the FTC’s Bureau of Competition, noted earlier this year:

[V]ertical merger enforcement is still a small part of our merger workload….

There is a strong theoretical basis for horizontal enforcement because economic models predict at least nominal potential for anticompetitive effects due to elimination of horizontal competition between substitutes.

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

On its face, and consistent with the last quarter century of merger enforcement by the DOJ and FTC, the Comcast acquisition would be less likely to trigger antitrust scrutiny, and the Disney acquisition raises more straightforward antitrust issues.

This is true even in light of the fact that the DOJ decided to challenge the AT&T-Time Warner (AT&T/TWX) merger.

The AT&T/TWX merger is a single data point in a long history of successful vertical mergers that attracted little scrutiny, and no litigation, by antitrust enforcers (although several have been approved subject to consent orders).

Just because the DOJ challenged that one merger does not mean that antitrust enforcers generally, or even the DOJ in particular, have suddenly become more hostile to vertical mergers.

Of particular importance here: the theory of harm the DOJ argued in the AT&T/TWX case is far from well-accepted, while the potential theory that could underpin a challenge to a Disney/Fox merger is well-established. As Bruce Hoffman further remarks:

I am skeptical of arguments that vertical mergers cause harm due to an increased bargaining skill; this is likely not an anticompetitive effect because it does not flow from a reduction in competition. I would contrast that to the elimination of competition in a horizontal merger that leads to an increase in bargaining leverage that could raise price or reduce output.

The Relatively Lower Risk of a Vertical Merger Challenge Hasn’t Changed Following the DOJ’s AT&T/Time Warner Challenge

Judge Leon is expected to rule on the AT&T/TWX merger in a matter of weeks. The theory underpinning the DOJ’s challenge is problematic (to say the least), and the case it presented was decidedly weak. But no litigated legal outcome is ever certain, and the court could, of course, rule against the merger nevertheless.

Yet even if the court does rule against the AT&T/TWX merger, this hardly suggests that a Comcast/Fox deal would create a greater antitrust risk than would a Disney/Fox merger.

A single successful challenge to a vertical merger — what would be, in fact, the first successful vertical merger challenge in four decades — doesn’t mean that the courts are becoming hostile to vertical mergers, any more than the DOJ’s challenge means that vertical mergers suddenly entail heightened enforcement risk. Rather, it would simply mean that, given the specific facts of the case, the DOJ was able to make out its prima facie case and the defendants were unable to rebut it.

A ruling for the DOJ in the AT&T/TWX merger challenge would be rooted in a highly fact-specific analysis that could have no direct bearing on future cases.

In the AT&T/TWX case, the court’s decision will turn on its assessment of the DOJ’s argument that the merged firm could raise subscriber prices by a few pennies per subscriber. But as AT&T’s attorney aptly pointed out at trial (echoing the testimony of AT&T’s economist, Dennis Carlton):

The government’s modeled price increase is so negligible that, given the inherent uncertainty in that predictive exercise, it is not meaningfully distinguishable from zero.

Even minor deviations from the facts or the assumptions used in the AT&T/TWX case could completely upend the analysis — and there are important differences between the AT&T/TWX merger and a Comcast/Fox merger. True, both would be largely vertical mergers that would bring together programming and distribution assets in the home video market. But the foreclosure effects touted by the DOJ in the AT&T/TWX merger are seemingly either substantially smaller or entirely non-existent in the proposed Comcast/Fox merger.

Most importantly, the content at issue in AT&T/TWX is at least arguably (and, in fact, argued by the DOJ) “must have” programming — Time Warner’s premium HBO channels and its CNN news programming, in particular, were central to the DOJ’s foreclosure argument. By contrast, the programming that Comcast would pick up as a result of the proposed merger with Fox — FX (a popular, but non-essential, basic cable channel) and National Geographic channels (which attract a tiny fraction of cable viewing) — would be extremely unlikely to merit that designation.

Moreover, the DOJ made much of the fact that AT&T, through DirecTV, has a national distribution footprint. As a result, its analysis depended upon the company’s potential ability to attract, in every market in the country, new subscribers decamping from competing providers from whom it withholds access to Time Warner content. Comcast, on the other hand, provides cable service in only about 35% of the country. This significantly limits its ability to credibly threaten competitors, because its ability to recoup lost licensing fees by picking up new subscribers is so much more limited.
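A stylized example may help illustrate why the footprint difference matters so much to a foreclosure theory. The toy model below is purely illustrative (it is not the DOJ's econometric model, and every number is hypothetical): a programmer-distributor that blacks out a rival forgoes licensing fees on the rival's entire subscriber base, but recaptures only the share of switching subscribers who live where it actually offers service.

```python
# Toy illustration (not the DOJ's model; every number is hypothetical) of
# why footprint matters to a foreclosure theory. Blacking out a rival costs
# the programmer its licensing fees on the rival's entire subscriber base,
# but it recaptures only the switchers who live inside its own footprint.

def blackout_payoff(rival_subs, fee_per_sub, switchers, margin_per_sub,
                    footprint_share):
    lost_fees = rival_subs * fee_per_sub      # licensing revenue forgone
    recaptured = switchers * footprint_share  # switchers it can actually sign up
    return recaptured * margin_per_sub - lost_fees

params = dict(rival_subs=1_000_000, fee_per_sub=60,
              switchers=200_000, margin_per_sub=500)
print(blackout_payoff(**params, footprint_share=1.00))  # national footprint: +40,000,000
print(blackout_payoff(**params, footprint_share=0.35))  # ~Comcast footprint: -25,000,000
```

On these (again, hypothetical) numbers, a blackout that pays off for a nationwide distributor is a money-loser for one reaching 35% of homes, which is the intuition behind the argument above.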

And while some RSNs may offer some highly prized live sports programming, the mismatch between Comcast’s footprint and the FOX RSNs (only about 8 of the 22 Fox RSNs are in Comcast service areas) severely limits any ability or incentive the company would have to leverage that content for higher fees. Again, to the extent that RSN programming is not “must-have,” and to the extent there is not overlap between the RSN’s geographic area and Comcast’s service area, the situation is manifestly not the same as the one at issue in the AT&T/TWX merger.

In sum, a ruling in favor of the DOJ in the AT&T/TWX case would be far from decisive in predicting how the agency and the courts would assess any potential concerns arising from Comcast’s ownership of Fox’s assets.

A Comcast/Fox Deal May Entail Lower Antitrust Risk than a Disney/Fox Merger

As discussed below, concerns about antitrust enforcement risk from a Comcast/Fox merger are likely overstated. Perhaps more importantly, however, to the extent these concerns are legitimate, they apply at least as much to a Disney/Fox merger. There is, at minimum, no basis for assuming a Comcast deal would present any greater regulatory risk.

The Antitrust Risk of a Comcast/Fox Merger Is Likely Overstated

The primary theory upon which antitrust enforcers could conceivably base a Comcast/Fox merger challenge would be a vertical foreclosure theory. Importantly, such a challenge would have to be based on the incremental effect of adding the Fox assets to Comcast, and not on the basis of its existing assets. Thus, for example, antitrust enforcers would not be able to base a merger challenge on the possibility that Comcast could leverage NBC content it currently owns to extract higher fees from competitors. Rather, only if the combination of NBC programming with additional content from Fox could create a new antitrust risk would a case be tenable.

Enforcers would be unlikely to view the addition of FX and National Geographic to the portfolio of programming content Comcast currently owns as sufficient to raise concerns that the merger would give Comcast anticompetitive bargaining power or the ability to foreclose access to its content.

Although it is even less likely, enforcers could be concerned with the (horizontal) addition of 20th Century Fox filmed entertainment to Universal’s existing film production and distribution. But the theatrical film market is undeniably competitive, with the largest studio by revenue (Disney) holding only 22% of the market last year. The combination of 20th Century Fox with Universal would still result in a market share of only around 25% based on 2017 revenues (and, depending on the year, might not even be the industry’s largest share).
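For a rough sense of how enforcers might screen that horizontal overlap, consider the standard HHI arithmetic. Only the ~22% Disney share and the ~25% combined figure come from the discussion above; the individual shares in the sketch below are hypothetical placeholders chosen to sum to roughly 25%:

```python
# Rough screening arithmetic for the horizontal (film) overlap, using the
# standard HHI merger screen. Only the ~22% Disney share and the ~25%
# combined figure come from the text; the individual Universal and Fox
# shares below are hypothetical placeholders summing to roughly 25%.
universal_share = 13.0   # hypothetical 2017 box-office share (%)
fox_share = 12.0         # hypothetical 2017 box-office share (%)

delta_hhi = 2 * universal_share * fox_share  # the merger's HHI increase
print(delta_hhi)  # 312 points

# Under the 2010 Horizontal Merger Guidelines, an increase of more than 100
# points in a moderately concentrated market "potentially raise[s]
# significant competitive concerns": the horizontal overlap gives enforcers
# a conventional hook that a purely vertical deal lacks.
```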

There is also little reason to think that a Comcast controlling interest in Hulu would attract problematic antitrust attention. Comcast has already demonstrated an interest in diversifying its revenue across cable subscriptions and licensing, broadband subscriptions, and licensing to OVDs, as evidenced by its recent deal to offer Netflix as part of its Xfinity packages. Hulu likely presents just one more avenue for pursuing this same diversification strategy. And Universal has a history (see, e.g., this, this, and this) of very broad licensing across cable providers, cable networks, OVDs, and the like.

In the case of Hulu, moreover, the fact that Comcast is vertically integrated in broadband as well as cable service likely reduces the anticompetitive risk because more-attractive OVD content has the potential to increase demand for Comcast’s broadband service. Broadband offers larger margins (and is growing more rapidly) than cable, and it’s quite possible that any loss in Comcast’s cable subscriber revenue from Hulu’s success would be more than offset by gains in its content licensing and broadband subscription revenue. The same, of course, goes for Comcast’s incentives to license content to OVD competitors like Netflix: Comcast plausibly gains broadband subscription revenue from heightened consumer demand for Netflix, and this at least partially offsets any possible harm to Hulu from Netflix’s success.

At the same time, especially relative to Netflix’s vast library of original programming (an expected $8 billion worth in 2018 alone) and content licensed from other sources, the additional content Comcast would gain from a merger with Fox is not likely to appreciably increase its bargaining leverage or its ability to foreclose Netflix’s access to its content.     

Finally, Comcast’s ownership of Fox’s RSNs could, as noted, raise antitrust enforcers’ eyebrows. Enforcers could be concerned that Comcast would condition competitors’ access to RSN programming on higher licensing fees or prioritization of its NBC Sports channels.

While this is indeed a potential risk, it is hardly a foregone conclusion that it would draw an enforcement action. Among other things, NBC is far from the market leader, and improving its competitive position relative to ESPN could be viewed as a benefit of the deal. In any case, potential problems arising from ownership of the RSNs could easily be dealt with through divestiture or behavioral conditions; they are extremely unlikely to lead to an outright merger challenge.

The Antitrust Risk of a Disney Deal May Be Greater than Expected

While a Comcast/Fox deal is not entirely free of antitrust enforcement risk, it certainly doesn’t entail sufficient risk to deem the deal dead on arrival. Moreover, it may entail less antitrust enforcement risk than would a Disney/Fox tie-up.

Yet, curiously, the joint proxy statement doesn’t mention any antitrust risk from the Disney deal at all and seems to suggest that the Fox board applied no risk discount in evaluating Disney’s bid.

Disney — already the market leader in the filmed entertainment industry — would acquire an even larger share of box office proceeds (and associated licensing revenues) through acquisition of Fox’s film properties. Perhaps even more important, the deal would bring the movie rights to almost all of the Marvel Universe within Disney’s ambit.

While, as suggested above, even that combination probably wouldn’t trigger any sort of market power presumption, it would certainly create an entity with a larger share of the market and stronger control of the industry’s most valuable franchises than would a Comcast/Fox deal.

Another relatively larger complication for a Disney/Fox merger arises from the prospect of combining Fox’s RSNs with ESPN. Whatever ability or incentive either company would have to engage in anticompetitive conduct surrounding sports programming, that risk would seem to be more significant for the undisputed market leader, Disney. At the same time, although ESPN remains powerful, demand for it on cable has been flagging. Disney could well see the ability to bundle ESPN with regional sports content as a way to prop up ESPN’s subscription revenues — a practice, in fact, that it has employed successfully in the past.

Finally, it must be noted that licensing of consumer products is an even bigger driver of revenue from filmed entertainment than is theatrical release. No other company comes close to Disney in this space.

Disney is the world’s largest licensor, with properties like Star Wars and Marvel Comics generating almost $57 billion in retail sales of licensed merchandise in 2016. Universal is in a distant seventh place, with 2016 licensed retail sales of about $6 billion. Adding Fox’s (admittedly relatively small) licensing business would enhance Disney’s substantial lead (even the number-two global licensor, Meredith, generated less than half of Disney’s licensed sales in 2016). Again, this is unlikely to be a significant concern for antitrust enforcers, but it is notable that, to the extent it might be an issue, it is one that applies to Disney and not Comcast.

Conclusion

Although I hope to address these issues in greater detail in the future, for now the preliminary assessment is clear: There is no legitimate basis for ascribing a greater antitrust risk to a Comcast/Fox deal than to a Disney/Fox deal.

At this point, only the most masochistic and cynical among DC’s policy elite actually want the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promises only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it’s not completely clear that a CRA resolution aimed at a regulatory classification decision will work the way Congress intends; it could simply trigger more litigation cycles, largely because it is unclear which parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom’s position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom’s central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) strip consumers of certain important protections — taking away the FCC’s transparency requirements for ISPs and imperiling privacy protections currently ensured by the FTC — and 2) prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back-and-forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Virtually any legislative solution, whatever it ends up looking like, would be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom’s arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it), what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and to disable the FCC from offering another transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has had, and continues to have, important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC’s purview, placing it back into the lap of the FCC — which is already prohibited from adopting privacy rules following last year’s successful CRA resolution undoing the Wheeler FCC’s broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether Congress accomplishes anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or that, even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched, viewpoints that have little relevance to the on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer’s telephone to the carrier’s switch. Once at the switch, various other services could be offered — but providing them didn’t require more than a basic analog voice circuit to the customer’s home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.
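A stylized numeric example (all numbers hypothetical) shows how this implicit cross-subsidy worked in practice: the regulator sets a single blended rate that recovers the carrier's total costs across cheap urban lines and expensive rural lines alike.

```python
# Stylized example of the implicit cross-subsidy described above; all
# numbers are hypothetical. The regulator sets one blended rate that lets
# the carrier recover its total costs across cheap urban lines and
# expensive rural lines alike.
urban_lines, urban_cost = 900_000, 15.0   # lines, monthly cost per line ($)
rural_lines, rural_cost = 100_000, 60.0

total_cost = urban_lines * urban_cost + rural_lines * rural_cost
blended_rate = total_cost / (urban_lines + rural_lines)
print(f"${blended_rate:.2f}/month")  # $19.50: urban customers pay ~$4.50 over
                                     # cost so rural lines can be served below cost
```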

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high-cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today universal service, at least from the perspective of closing the digital divide, means something different, however. The technological needs of rural America are different than those of urban America; the technological needs of poor and lower-income America are different than those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country’s many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously both more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach closing digital divides, need to be based in the first instance on the actual needs of the communities that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more typical (to the DC crowd) use cases.
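A rough load calculation makes the point. In the sketch below, the device counts and reporting rates are hypothetical, but they are of the scale described above, and the result is a network that must sustain a hundred thousand concurrent device sessions and thousands of packets per second while consuming only a trickle of bandwidth:

```python
# Illustrative load calculation for the sensor-farm example. Device counts
# and reporting rates are hypothetical, but of the scale described above.
acres = 500
devices_per_acre = 200
report_interval_s = 30   # each device sends one reading every 30 seconds
packet_bytes = 100       # small telemetry payload

devices = acres * devices_per_acre                        # 100,000 devices
packets_per_sec = devices / report_interval_s             # ~3,333 packets/sec
bandwidth_mbps = packets_per_sec * packet_bytes * 8 / 1e6 # ~2.7 Mbps

print(devices, round(packets_per_sec), round(bandwidth_mbps, 1))
```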

On the other end of the agricultural spectrum, many farms don’t own their own combines. Combines cost upwards of a million dollars, and one modern combine is sufficient to tend several hundred acres in a given farming season, so it is common for farmers to hire someone who owns a combine to service their fields. A single combine service may operate on a dozen farms during a harvest season. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.
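The arithmetic here is simple but stark. As a hedged illustration, assume a hypothetical 50 GB pre-season dataset (actual volumes will vary); download time then swings from days to minutes depending on the link:

```python
# Back-of-the-envelope sketch; the 50 GB dataset size is a hypothetical
# stand-in for pre-season GIS, mapping, weather, and crop data.

DATASET_GB = 50

def download_days(dataset_gb: float, link_mbps: float) -> float:
    """Days needed to move dataset_gb over a sustained link_mbps link."""
    seconds = dataset_gb * 8_000 / link_mbps  # GB -> megabits, then / Mbps
    return seconds / 86_400                   # seconds per day

for mbps in (1.5, 10, 100):  # rural DSL, modest broadband, fast connection
    print(f"{mbps:>6.1f} Mbps: {download_days(DATASET_GB, mbps):.2f} days")
```

At rural-DSL speeds the machine sits idle for roughly three days per download; at even modest broadband speeds the same transfer finishes overnight.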

At the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren’t about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.
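A rough sanity check bears this out. Using assumed, deliberately conservative bitrates for a basic video consultation, the required capacity sits comfortably below 10 Mbps:

```python
# Hedged sanity check with assumed bitrates for a basic video consultation.

VIDEO_KBPS = 1_500   # assumed bitrate for workable consultation-quality video
AUDIO_KBPS = 64      # assumed audio bitrate
HEADROOM = 1.2       # assumed multiplier for protocol overhead and jitter

needed_mbps = (VIDEO_KBPS + AUDIO_KBPS) * HEADROOM / 1_000

for link_mbps in (1.0, 3.0, 6.0):
    verdict = "workable" if link_mbps >= needed_mbps else "too slow"
    print(f"{link_mbps:.0f} Mbps link (need ~{needed_mbps:.1f} Mbps): {verdict}")
```

On these assumptions, roughly 2 Mbps of sustained capacity is enough — which is precisely why a reliable connection at even single-digit megabit speeds can be transformative.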

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides found elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced-price school lunches (a common proxy for low household income) and rely on anchor institutions such as schools and libraries for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the policy firmament there, are fundamentally different. Those discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much to accomplish either of those things.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence a decision whether to execute upon a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.
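The business-case point can be made concrete with a toy model. The sketch below uses entirely hypothetical figures for build cost, revenue, and operating expense; it is not a claim about any actual deployment. It simply shows how a one-time capital subsidy shrinks the upfront loss without fixing a recurring revenue shortfall, which is what “no business case” typically means:

```python
# Toy model with entirely hypothetical figures: a one-time subsidy lowers
# upfront cost, but a recurring revenue shortfall still sinks the project.

def npv(capex: float, revenue: float, opex: float,
        years: int = 10, rate: float = 0.07) -> float:
    """Net present value: -capex plus discounted annual net cash flows."""
    flows = sum((revenue - opex) / (1 + rate) ** t
                for t in range(1, years + 1))
    return -capex + flows

CAPEX, REVENUE, OPEX = 5_000_000, 300_000, 350_000  # assumed rural build

print(f"No subsidy:  NPV = ${npv(CAPEX, REVENUE, OPEX):,.0f}")
print(f"$2M subsidy: NPV = ${npv(CAPEX - 2_000_000, REVENUE, OPEX):,.0f}")
```

On these assumptions, cheap money moves the NPV closer to zero but cannot make it positive, because the shortfall is in the operating economics, not the capital stack.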

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies do, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren’t going to be built unless their developers have access to the broadband connections needed to build and maintain them and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services, not, as the Wheeler-era virtuous cycle would have it, the other way around.

The effect of Washington’s obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a straw man of the rural digital divide, using it as a foil to protect and advance their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and urgency of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287).

More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that, among other things, funded a $600 million pilot program, administered through the Department of Agriculture’s Rural Utilities Service, to award grants and loans for rural broadband build-out.

But both of these efforts rely too heavily on throwing money at the rural divide. (Speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it’s just another effort to give carriers cheap money, which doesn’t do much to help close the divide!) It is, nonetheless, good to see urgent calls for, and an interest in, experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.