Archives For wireless

The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).

While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it altogether. In this blog post, we evaluate the state AGs’ claims (and find them wanting).

Four firms good, three firms bad?

The state AGs’ opposition to the T-Mobile/Sprint merger is based on the claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:

The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.

. . . 

Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.

But there are no economic grounds for the assertion that a four-firm industry is at a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.

A proper assessment of this transaction, as with any other telecom merger, requires accounting for the specific characteristics of the affected markets. Most importantly, that accounting must include the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.

Opponents claim this merger takes us from four national carriers to three. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.

In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will narrow following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improved service and, over time, a lower cost of access.

Is prepaid a separate market?

The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitutes a separate product market:

There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with a MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.

Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between pre- and postpaid markets, today the line between them is blurry at best, and may not be a meaningful divide at all.

To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market. 

But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, some prepaid consumers don’t have the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past, something that has not been demonstrated to be possible or likely.

One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with the DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.

Will the merger harm low-income consumers?

While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention paid to that divide in submissions to the federal agencies. For example, the Communication Workers of America opined:

the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.

This is merely an assertion, and a misleading one. To the extent the “digital divide” would grow following the merger, it would be because urban access improves more rapidly than rural access, not because rural access declines.

Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge. 

Indeed, in the absence of a merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilization of that spectrum would require substantial investment in new infrastructure and additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.

It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings that are expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, by exactly how much each group will benefit. But, prima facie, the prospect of improvement in rural access seems a strong argument in favor of the merger from a public interest standpoint.

The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.
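A quick check of that claim using the cited figures (our arithmetic, not from the underlying reports): access among lower-income households grew by

\[
62\% - 52\% = 10\ \text{percentage points}\quad\left(\tfrac{10}{52}\approx 19\%\ \text{relative growth}\right),
\]

versus

\[
78\% - 72\% = 6\ \text{percentage points}\quad\left(\tfrac{6}{72}\approx 8\%\ \text{relative growth}\right)
\]

for the population as a whole, so the divide narrowed on both absolute and relative measures.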

Concluding remarks

By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
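For concreteness, here is a minimal textbook statement of the framework (the generalized Nash solution with transferable utility), offered as background rather than as the particular model either side’s experts used. With total surplus $v$, disagreement payoffs (outside options) $d_1$ and $d_2$, and bargaining weight $\beta \in (0,1)$, the parties split the surplus by solving

\[
\max_{u_1 + u_2 = v}\;(u_1 - d_1)^{\beta}\,(u_2 - d_2)^{1-\beta},
\qquad\text{which yields}\qquad
u_1^{*} = d_1 + \beta\,(v - d_1 - d_2).
\]

Since $\partial u_1^{*}/\partial d_1 = 1 - \beta > 0$, anything that improves a party’s outside option raises its equilibrium share of the surplus, which is precisely the mechanism behind the blackout-threat theory discussed below.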

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
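To see how those three inputs interact, here is a deliberately simplified, hypothetical sketch (ours, not Dr. Shapiro’s actual model) of the quantity they feed into: the profit the merged firm would gain from a long-term blackout, which raises its outside option in the bargaining framework above. All numbers are illustrative.

```python
# A hypothetical back-of-the-envelope sketch, not Dr. Shapiro's actual model.
# It combines the three inputs Judge Leon scrutinized into the merged firm's
# gain from a long-term Turner blackout, i.e., the improvement in its
# outside option during affiliate-fee negotiations.

def blackout_gain(subs_lost: float, diversion_share: float,
                  margin_per_sub: float) -> float:
    """Monthly profit gained if a rival distributor loses `subs_lost`
    subscribers, a `diversion_share` fraction of whom switch to AT&T,
    each worth `margin_per_sub` in monthly profit."""
    return subs_lost * diversion_share * margin_per_sub

# Illustrative, made-up inputs: 1,000,000 subscribers lost by the rival,
# one third diverting to AT&T, $50 monthly margin per gained subscriber.
print(f"Gain from blackout: ${blackout_gain(1_000_000, 1/3, 50.0):,.0f}/month")
```

Because the three inputs multiply, errors in any one of them compound in the final figure, which helps explain why Judge Leon considered, and discredited, each input in turn.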

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
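The FTC’s margin-squeeze logic can be rendered in stylized form (our notation, a sketch rather than the expert model): let $v$ be the value an OEM derives from a handset, $p_Q$ and $p_r$ the prices of Qualcomm’s and a rival’s chips, $r$ the FRAND royalty, and $s$ the alleged surcharge paid on top of it when the OEM buys a rival’s chip. The OEM’s gains from trade under each option are

\[
\underbrace{v - p_Q - r}_{\text{Qualcomm chip}}
\qquad\text{vs.}\qquad
\underbrace{v - p_r - r - s}_{\text{rival chip}},
\]

so the surcharge operates like a tax on rival chipsets: to match Qualcomm’s offer a rival must price at $p_r \le p_Q - s$, compressing its margin by the amount of the surcharge.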

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases; indeed, he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder: as complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Unexpectedly, on the day that the white copy of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers is garnering a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful romance language.

Rather, it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and that, if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.

Much ado about data

This tempest in a teapot is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero rated apps. Similar plans have been offered in Portugal since at least 2012.


These data packages are a clear win for mobile subscribers, especially prepaid subscribers, who tend to be at a lower income level than postpaid subscribers. They allow consumers to customize their plans beyond their mobile broadband subscription, enabling them to consume data in ways better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two-year contract.
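The per-gigabyte arithmetic makes the consumer benefit plain (our calculation from the prices above):

\[
\frac{\text{EUR }6.99}{10\ \text{GB}} \approx \text{EUR }0.70\ \text{per GB}
\qquad\text{vs.}\qquad
\frac{\text{EUR }26}{10\ \text{GB}} = \text{EUR }2.60\ \text{per GB},
\]

roughly a 73% discount, before even accounting for the two-year contract the full-price option requires.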

These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. In keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick the operator whose plan suits them best.

In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.

Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.


Net Neutrality in Portugal

But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning it became directly applicable EU law upon enactment; national governments, including Portugal’s, did not need to transpose it into national legislation.

While the regulation was automatically applicable in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero rating plans in the hands of national regulators (the Regulation likely classifies the data packages at issue here as zero rated plans because they give users a lot of data for a low price). While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.

In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study by the EC’s competition authority (DG Comp) concluding that there is little reason to believe that such data practices raise concerns.

The activists’ willful misunderstanding of clearly pro-consumer data plans and purposeful mischaracterization of Portugal as lacking net neutrality rules are inflammatory and deceitful. Even more puzzling for the activists (but great for consumers) is that nothing in the 2015 Open Internet Order would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.

It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s jettisoning of the FCC’s light-touch framework for Internet access regulation without hard evidence to the Oklahoma City Thunder’s James Harden trade. That infamous deal broke up a young nucleus of three of the best players in the NBA in 2012 because keeping all three might someday create salary cap concerns. What few saw coming was a new TV deal in 2015 that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked about what the standard meant (not long after writing it), former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in the FCC’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user that has been subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers creating the free data offerings provide many benefits to consumers, including enabling subscribers to consume more data (or, for low-income users, to consume data in the first place), facilitating product differentiation by mobile operators that offer a variety of free data plans (including allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan), increasing the overall consumption of content, and reducing users’ cost of obtaining information. It’s also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.

In December 2015 then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than blocking free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the 2016 EC guidelines left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria under which to evaluate free data plans. The criteria include assessing the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether there is financial compensation involved. The standard is open-ended, and free data plans as they are offered in the US would “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as that of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. In contrast to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter and WhatsApp were offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data to subscribers who pay for a data plan, in order to ensure free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokemon Go, Waze, Snapchat, Apple Music, Spotify, Netflix or YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives to both flex their regulatory muscle to govern Internet access, while simultaneously making free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved, low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. Notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, in other words, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes which are unlikely to be nearly as nimble or as effective — and yet still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model for users’ early stage experimentation with and adoption of augmented reality, virtual reality and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut the legs out from under free data under the OIO absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: It is a rule rife with a parade of uncertainties and only theoretical problems, needlessly saddling companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence that allows consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans), while still facilitating a means for policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Netflix’s latest net neutrality hypocrisy (yes, there have been others. See here and here, for example) involves its long-term, undisclosed throttling of its video traffic on AT&T’s and Verizon’s wireless networks, while it lobbied heavily for net neutrality rules from the FCC that would prevent just such throttling by ISPs.

It was Netflix that coined the term “strong net neutrality,” in an effort to import interconnection (the connections between ISPs and edge provider networks) into the net neutrality fold. That alone was a bastardization of what net neutrality purportedly stood for, as I previously noted:

There is a reason every iteration of the FCC’s net neutrality rules, including the latest, have explicitly not applied to backbone interconnection agreements: Interconnection over the backbone has always been open and competitive, and it simply doesn’t give rise to the kind of discrimination concerns net neutrality is meant to address.

That Netflix would prefer not to pay for delivery of its content isn’t surprising. But net neutrality regulations don’t — and shouldn’t — have anything to do with it.

But Netflix did something else with “strong net neutrality.” It tied it to consumer choice:

This weak net neutrality isn’t enough to protect an open, competitive Internet; a stronger form of net neutrality is required. Strong net neutrality additionally prevents ISPs from charging a toll for interconnection to services like Netflix, YouTube, or Skype, or intermediaries such as Cogent, Akamai or Level 3, to deliver the services and data requested by ISP residential subscribers. Instead, they must provide sufficient access to their network without charge. (Emphasis added).

A focus on consumers is laudable, of course, but when the focus is on consumers there’s no reason to differentiate between ISPs (to whom net neutrality rules apply) and content providers entering into contracts with ISPs to deliver their content (to whom net neutrality rules don’t apply).

And Netflix has just shown us exactly why that’s the case.

Netflix can and does engage in management of its streams in order (presumably) to optimize consumer experience as users move between networks, devices and viewers (e.g., native apps vs Internet browser windows) with very different characteristics and limitations. That’s all well and good. But as we noted in our Policy Comments in the FCC’s Open Internet Order proceeding,

In this circumstance, particularly when the content in question is Netflix, with 30% of network traffic, both the network’s and the content provider’s transmission decisions may be determinative of network quality, as may the users’ device and application choices.

As a 2011 paper by a group of network engineers studying the network characteristics of video streaming data from Netflix and YouTube noted:

This is a concern as it means that a sudden change of application or container in a large population might have a significant impact on the network traffic. Considering the very fast changes in trends this is a real possibility, the most likely being a change from Flash to HTML5 along with an increase in the use of mobile devices…. [S]treaming videos at high resolutions can result in smoother aggregate traffic while at the same time linearly increase the aggregate data rate due to video streaming.

Again, a concern with consumers is admirable, but Netflix isn’t concerned with consumers. It’s concerned at most with consumers of Netflix, while they are consuming Netflix. But the reality is that Netflix’s content management decisions can adversely affect consumers overall, including its own subscribers when they aren’t watching Netflix.

And here’s the huge irony. The FCC’s net neutrality rules are tailor-made to guarantee that Netflix will never have any incentive to take these externalities into account in its own decisions. What’s more, they ensure that ISPs are severely hamstrung in managing their networks for the benefit of all consumers, not least because their interconnection deals with large content providers like Netflix are now being closely scrutinized.

It’s great that Netflix thinks it should manage its video delivery to optimize viewing under different network conditions. But net neutrality rules ensure that Netflix bears no cost for overwhelming the network in the process. Essentially, short of building new capacity — at great expense to all ISP subscribers, of course — ISPs can’t do much about it, either, under the rules. And, of course, the rules also make it impossible for ISPs to negotiate for financial help from Netflix (or its heaviest users) in paying for those upgrades.

On top of this, net neutrality advocates have taken aim at usage-based billing and other pricing practices that would help with the problem by enabling ISPs to charge their heaviest users more in order to alleviate the inherent subsidy by normal users that flat-rate billing entails. (Netflix itself, as one of the articles linked above discusses at length, is hypocritically inconsistent on this score).

As we also noted in our OIO Policy Comments:

The idea that consumers and competition generally are better off when content providers face no incentive to take account of congestion externalities in their pricing (or when users have no incentive to take account of their own usage) runs counter to basic economic logic and is unsupported by the evidence. In fact, contrary to such claims, usage-based pricing, congestion pricing and sponsored content, among other nonlinear pricing models, would, in many circumstances, further incentivize networks to expand capacity (not create artificial scarcity).

Some concern for consumers. Under Netflix’s approach consumers get it coming and going: Either their non-Netflix traffic is compromised for the sake of Netflix’s traffic, or they have to pay higher subscription fees to ISPs for the privilege of accommodating Netflix’s ever-expanding traffic loads (4K videos, anyone?) — whether they ever use Netflix or not.

Sometimes, apparently, Netflix throttles its own traffic in order to “help” a few consumers. (That it does so without disclosing the practice is pretty galling, especially given the enhanced transparency rules in the Open Internet Order — something Netflix also advocated for, and which also apply only to ISPs and not to content providers). But its self-aggrandizing advocacy for the FCC’s latest net neutrality rules reveals that its first priority is to screw over consumers, so long as it can shift the blame and the cost to others.

It’s easy to look at the net neutrality debate and assume that everyone is acting in their self-interest and against consumer welfare. Thus, many on the left denounce all opposition to Title II as essentially “Comcast-funded,” aimed at undermining the Open Internet to further nefarious, hidden agendas. No matter how often opponents make the economic argument that Title II would reduce incentives to invest in the network, many will not listen because they have convinced themselves that it is simply special-interest pleading.

But whatever you think of ISPs’ incentives to oppose Title II, the incentive for the tech companies (like Cisco, Qualcomm, Nokia and IBM) that design and build key elements of network infrastructure and the devices that connect to it (i.e., essential input providers) is to build out networks and increase adoption (i.e., to expand output). These companies’ fundamental incentive with respect to regulation of the Internet is the adoption of rules that favor investment. They operate in highly competitive markets, they don’t offer competing content and they don’t stand as alleged “gatekeepers” seeking monopoly returns from, or control over, what crosses over the Interwebs.

Thus, it is no small thing that 60 tech companies — including some of the world’s largest, based both in the US and abroad — that are heavily invested in the buildout of networks and devices, as well as more than 100 manufacturing firms that are increasingly building the products and devices that make up the “Internet of Things,” have written letters strongly opposing the reclassification of broadband under Title II.

There is probably no more objective evidence that Title II reclassification will harm broadband deployment than the opposition of these informed market participants.

These companies have the most to lose from reduced buildout, and no reasonable nefarious plots can be constructed to impugn their opposition to reclassification as consumer-harming self-interest in disguise. Their self-interest is on their sleeves: More broadband deployment and adoption — which is exactly what the Open Internet proceedings are supposed to accomplish.

If the FCC chooses the reclassification route, it will most assuredly end up in litigation. And when it does, the opposition of these companies to Title II should be Exhibit A in the effort to debunk the FCC’s purported basis for its rules: the “virtuous circle” theory that says that strong net neutrality rules are necessary to drive broadband investment and deployment.

Access to all the wonderful content the Internet has brought us is not possible without the billions of dollars that have been invested in building the networks and devices themselves. Let’s not kill the goose that lays the golden eggs.

Today the D.C. Circuit struck down most of the FCC’s 2010 Open Internet Order, rejecting rules that required broadband providers to carry all traffic for edge providers (“anti-blocking”) and prevented providers from negotiating deals for prioritized carriage. However, the appeals court did conclude that the FCC has statutory authority to issue “Net Neutrality” rules under Section 706(a) and let stand the FCC’s requirement that broadband providers clearly disclose their network management practices.

The following statement may be attributed to Geoffrey Manne and Berin Szoka:

The FCC may have lost today’s battle, but it just won the war over regulating the Internet. By recognizing Section 706 as an independent grant of statutory authority, the court has given the FCC near limitless power to regulate not just broadband, but the Internet itself, as Judge Silberman recognized in his dissent.

The court left the door open for the FCC to write new Net Neutrality rules, provided the Commission doesn’t treat broadband providers as common carriers. This means that, even without reclassifying broadband as a Title II service, the FCC could require that any deals between broadband and content providers be reasonable and non-discriminatory, just as it has required wireless carriers to provide data roaming services to their competitors’ customers on that basis. In principle, this might be a sound approach, if the rule resembles antitrust standards. But even that limitation could easily be evaded if the FCC regulates through case-by-case enforcement actions, as it tried to do before issuing the Open Internet Order. Either way, the FCC need only make a colorable argument under Section 706 that its actions are designed to “encourage the deployment… of advanced telecommunications services.” If the FCC’s tenuous “triple cushion shot” argument could satisfy that test, there is little limit to the deference the FCC will receive.

But that’s just for Net Neutrality. Section 706 covers “advanced telecommunications,” which seems to include any information service, from broadband to the interconnectivity of smart appliances like washing machines and home thermostats. If the court’s ruling on Section 706 is really as broad as it sounds, and as the dissent fears, the FCC just acquired wide authority over these, as well — in short, the entire Internet, including the “Internet of Things.” While the court’s “no common carrier rules” limitation is a real one, the FCC clearly just gained enormous power that it didn’t have before today’s ruling.

Today’s decision essentially rewrites the Communications Act in a way that will, ironically, do the opposite of what the FCC claims: hurt, not help, deployment of new Internet services. Whatever the FCC’s role ought to be, such decisions should be up to our elected representatives, not three unelected FCC Commissioners. So if there’s a silver lining in any of this, it may be that the true implications of today’s decision are so radical that Congress finally writes a new Communications Act — a long-overdue process Congressmen Fred Upton and Greg Walden have recently begun.

Szoka and Manne are available for comment at media@techfreedom.org.

For those in the DC area interested in telecom regulation, there is another great event opportunity coming up next week.

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.

The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.

So which was it? More importantly, what can we learn from the seventy-year period before the 1984 break-up of AT&T, and from the last three decades of efforts to unleash competition? With fewer than a third of Americans relying on traditional telephony and Internet-based competitors increasingly driving competition, what does universal service mean in the digital era? As Congress contemplates overhauling the Communications Act, how can policymakers promote universal service through competition, by promoting innovation and investment? What should a new Kingsbury Commitment look like?

Following a luncheon keynote address by FCC Commissioner Ajit Pai, a diverse panel of experts moderated by TechFreedom President Berin Szoka will explore these issues and more. The panel includes:

  • Harold Feld, Public Knowledge
  • Rob Atkinson, Information Technology & Innovation Foundation
  • Hance Haney, Discovery Institute
  • Jeff Eisenach, American Enterprise Institute
  • Fred Campbell, Former FCC Wireless Telecommunications Bureau Chief

Space is limited, so RSVP now if you plan to attend in person. A live stream of the event will be available on this page. You can follow the conversation on Twitter with the #Kingsbury100 hashtag.

When:
Thursday, December 19, 2013
11:30 – 12:00 Registration & lunch
12:00 – 1:45 Event & live stream

The live stream will begin on this page at noon Eastern.

Where:
The Methodist Building
100 Maryland Ave NE
Washington D.C. 20002

Questions?
Email contact@techfreedom.org.

I have a new post up at TechPolicyDaily.com, excerpted below, in which I discuss the growing body of (surprisingly uncontroversial) work showing that broadband in the US compares favorably to that in the rest of the world. My conclusion, which is frankly more cynical than I like, is that concern about the US “falling behind” is a manufactured debate. It’s a compelling story that the media likes and that plays well for (some) academics.

Before the excerpt, I’d also like to quote one of today’s headlines from Slashdot:

“Google launched the citywide Wi-Fi network with much fanfare in 2006 as a way for Mountain View residents and businesses to connect to the Internet at no cost. It covers most of the Silicon Valley city and worked well until last year, as Slashdot readers may recall, when connectivity got rapidly worse. As a result, Mountain View is installing new Wi-Fi hotspots in parts of the city to supplement the poorly performing network operated by Google. Both the city and Google have blamed the problems on the design of the network. Google, which is involved in several projects to provide Internet access in various parts of the world, said in a statement that it is ‘actively in discussions with the Mountain View city staff to review several options for the future of the network.'”

The added emphasis is mine. It is added to draw attention to the simple point that designing and building networks is hard. Like, really really hard. Folks think that it’s easy, because they have small networks in their homes or offices — so surely they can scale to a nationwide network without much trouble. But all sorts of crazy stuff starts to happen when we substantially increase the scale of IP networks. This is just one of the very many things that should give us pause about calls for the buildout of a government-run or government-sponsored Internet infrastructure.

Another of those things is whether there’s any need for that. Which brings us to my TechPolicyDaily.com post:

In the week or so since TPRC, I’ve found myself dwelling on an observation I made during the conference: how much agreement there was, especially on issues usually thought of as controversial. I want to take a few paragraphs to consider what was probably the most surprisingly non-controversial panel of the conference, the final Internet Policy panel, in which two papers – one by ITIF’s Rob Atkinson and the other by James McConnaughey from NTIA – were presented showing that broadband Internet service in the US (and Canada, though I will focus on the US) compares quite well to that offered in the rest of the world. […]

But the real question that this panel raised for me was: given how well the US actually compares to other countries, why does concern about the US falling behind dominate so much discourse in this area? When you get technical, economic, legal, and policy experts together in a room – which is what TPRC does – the near consensus seems to be that the “kids are all right”; but when you read the press, or much of the high-profile academic literature, “the sky is falling.”

The gap between these assessments could not be larger. We need to ask why that is. I hate to be cynical or disparaging – especially since I know strong advocates on both sides and believe that their concerns are sincere and efforts earnest. But after this year’s conference, I’m having trouble shaking the feeling that ongoing concern about how US broadband stacks up to the rest of the world is a manufactured debate. It’s a compelling, media- and public-friendly narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. […]

Compare this to the Chicken Little narrative. As I was writing this, I received a message from a friend asking my views on an Economist blog post that shares data from the ITU’s just-released Measuring the Information Society 2013 report. This data shows that the US has some of the highest prices for pre-paid handset-based mobile data around the world. That is, it reports the standard narrative – and it does so without looking at the report’s methodology. […]

Even more problematic than what the Economist blog reports, however, is what it doesn’t report. [The report contains data showing the US has some of the lowest cost fixed broadband and mobile broadband prices in the world. See the full post at TechPolicyDaily.com for the numbers.]

Now, there are possible methodological problems with these rankings, too. My point here isn’t to debate over the relative position of the United States. It’s to ask why the “story” about this report cherry-picks the alarming data, doesn’t consider its methodology, and ignores the data that contradicts its story.

Of course, I answered that question above: It’s a compelling, media- and public-friendly narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. Manufacturing debate sells copy and ads, and advances careers.

Susan Crawford recently received the OneCommunity Broadband Hero Award for being a “tireless advocate for 21st century high capacity network access.” In her recent debate with Geoffrey Manne and Berin Szoka, she emphasized that there is little competition in broadband or between cable broadband and wireless, asserting that the main players have effectively divided the markets. As a result, she argues (as she did here at 17:29) that broadband and wireless providers “are deciding not to invest in the very expensive infrastructure because they are very happy with the profits they are getting now.” In the debate, Manne countered by pointing to substantial investment and innovation in both the wired and wireless broadband marketplaces, and arguing that this is not something monopolists insulated from competition do. So, who’s right?

The recently released 2013 Progressive Policy Institute Report, U.S. Investment Heroes of 2013: The Companies Betting on America’s Future, has two useful little tables that lend support to Manne’s counterargument.

[Table: PPI’s top 25 nonfinancial companies, ranked by 2012 U.S. capital expenditure]

The first shows the top 25 investors that are nonfinancial companies, and guess who comes in 1st, 2nd, 10th, 13th, and 17th place? None other than AT&T, Verizon Communications, Comcast, Sprint Nextel, and Time Warner, respectively.

[Table: the same ranking with energy companies removed]

And when the table is adjusted by removing energy companies, those ranks become 1st, 2nd, 5th, 6th, and 9th. In fact, cable and telecom companies combined to invest over $50.5 billion in 2012.

This high level of investment by supposed monopolists is not a new development. The Progressive Policy Institute’s 2012 Report, Investment Heroes: Who’s Betting on America’s Future? indicates that the same main players have been investing heavily for years. Since 1996, the cable industry has invested over $200 billion into infrastructure alone. These investments have allowed 99.5% of Americans to have access to broadband – via landline, wireless, or both – as of the end of 2012.

There’s more. Not only has there been substantial investment that has increased access, but the speeds of service have increased dramatically over the past few years. The National Broadband Map data show that by the end of 2012:

  • Landline service ≥ 25 megabits per second download was available to 81.7% of households, up from 72.9% at the end of 2011 and 58.4% at the end of 2010
  • Landline service ≥ 100 megabits per second download was available to 51.5% of households, up from 43.4% at the end of 2011 and only 12.9% at the end of 2010
  • Service ≥ 1 gigabit per second download was available to 6.8% of households, predominantly via fiber
  • Fiber at any speed was available to 22.9% of households, up from 16.8% at the end of 2011 and 14.8% at the end of 2010
  • Landline broadband service at the 3 megabits / 768 kilobits threshold was available to 93.4% of households, up from 92.8% at the end of 2011
  • Mobile wireless broadband at the 3 megabits / 768 kilobits threshold was available to 94.1% of households, up from 75.8% at the end of 2011
  • Mobile wireless broadband ≥ 10 megabits per second download was available to 87% of households, up from 70.6% at the end of 2011 and 8.9% at the end of 2010
  • Landline broadband ≥ 10 megabits per second download was available to 91.1% of households

This leaves only one question: Will the real broadband heroes please stand up?

On Tuesday the European Commission opened formal proceedings against Motorola Mobility based on its patent licensing practices surrounding some of its core cellular telephony, Internet video and Wi-Fi technology. The Commission’s concerns, echoing those raised by Microsoft and Apple, center on Motorola’s allegedly high royalty rates and its efforts to use injunctions to enforce the “standards-essential patents” at issue.

As it happens, this development is just the latest, like so many in the tech world these days, in Microsoft’s ongoing regulatory, policy, and legal war against Google, which announced in August that it was planning to buy Motorola.

Microsoft’s claim, echoed in the Commission’s concern, that Motorola’s royalty offer was, in Microsoft’s colorful phrase, “so over-reaching that no rational company could ever have accepted it or even viewed it as a legitimate offer,” is misplaced. Motorola is seeking a royalty rate for its patents that is seemingly in line with customary rates.

In fact, Microsoft’s claim that Motorola’s royalty ask is extraordinary is refuted by its own conduct. As one commentator notes:

Microsoft complained that it might have to pay a tribute of up to $22.50 for every $1,000 laptop sold, and suggested that it might be fairer to pay just a few cents. This is the firm that is thought to make $10 to $15 from every $500 Android device that is sold, and for a raft of trivial software patents, not standard essential ones.

Seemingly forgetting this, Microsoft criticizes Motorola’s royalty ask on its 50 H.264 video codec patents by comparing it to the amount Microsoft pays for more than 2,000 other patents in the video codec’s patent pool, claiming that the former would cost it $4 billion while the latter costs it only $6.5 million. But this is comparing apples and oranges. It is not surprising to find some patents worth orders of magnitude more than others, or to find that license rates are a complicated function of the contracting parties’ particular negotiating positions and circumstances. It is no more inherently inappropriate for Microsoft to rake in 2-3% of the price of every Nook that Barnes & Noble sells than it is for Motorola to net 2.25% of the price of each Windows-operated computer sold – which is the royalty rate Motorola is seeking and which Microsoft wants declared anticompetitive out of hand.
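For concreteness, here is a quick back-of-the-envelope sketch of the effective rates implied by the per-device figures quoted above (the numbers are the commentator’s characterizations, not official license terms):

```python
# Effective royalty rate = per-device royalty / end-product price.
def effective_rate(royalty_per_device, device_price):
    return royalty_per_device / device_price

# Figures quoted above (illustrative characterizations, not official terms):
motorola_ask = effective_rate(22.50, 1000.0)    # Motorola's ask on a $1,000 laptop
microsoft_take = effective_rate(12.50, 500.0)   # midpoint of the $10-$15 Microsoft is
                                                # thought to make on a $500 Android device

print(f"Motorola's ask:   {motorola_ask:.2%}")    # 2.25%
print(f"Microsoft's take: {microsoft_take:.2%}")  # 2.50%
```

Expressed as a share of the end-product price, the two royalties are of the same order of magnitude, which is precisely why the “no rational company” rhetoric rings hollow.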

It’s not clear how much negotiation, if any, has taken place between the companies over the terms of Microsoft’s licensing of Motorola’s patents, but what is clear is that Microsoft’s complaint, echoed by the EC, is based on the size of Motorola’s initial royalty demand and its use of a legal injunction to enforce its patent rights. Neither of these, however, is particularly problematic, especially in an environment where companies like Microsoft and Apple aggressively wield exactly such tools to gain a competitive negotiating edge over their own competitors.

The court adjudicating this dispute in ongoing litigation in U.S. district court in Washington has thus far agreed. It denied Microsoft’s request for summary judgment that Motorola’s royalty demand violated its RAND commitment, noting its disagreement with Microsoft’s claim that “it is always facially unreasonable for a proposed royalty rate to result in a larger royalty payment for products that have higher end prices. Indeed, Motorola has previously entered into licensing agreements for its declared-essential patents at royalty rates similar to those offered to Microsoft and with royalty rates based on the price of the end product.”

The staggering aggregate numbers touted by Microsoft in its complaint and repeated by bloggers and journalists the world over are not a function of Motorola seeking an exorbitant royalty, but rather a function of Microsoft’s selling a lot of operating systems and earning a lot of revenue doing it. While the aggregate number ($4 billion, according to Microsoft) is huge, it is, as the court notes, based on a royalty rate that is in line with similar agreements.
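The arithmetic behind the court’s point is simple: the aggregate payment is just rate times revenue base. A minimal sketch, deriving the implied base from the two numbers in the post:

```python
# Aggregate royalty = rate x revenue base, so Microsoft's headline number
# mostly reflects the size of Microsoft's own Windows business.
rate = 0.0225     # Motorola's 2.25% ask
aggregate = 4e9   # Microsoft's claimed $4 billion exposure

implied_revenue_base = aggregate / rate
print(f"Implied end-product revenue base: ${implied_revenue_base / 1e9:.0f} billion")
# -> about $178 billion: the aggregate is big because the base is big,
#    not because the rate is out of line.
```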

The court also takes issue with Microsoft’s contention that the mere offer of allegedly unreasonable terms constitutes a breach of Motorola’s RAND commitment to license its patents on commercially reasonable terms. Quite sensibly, the court notes:

[T]he court is mindful that at the time of an initial offer, it is difficult for the offeror to know what would in fact constitute RAND terms for the offeree. Thus, what may appear to be RAND terms from the offeror’s perspective may be rejected out-of-hand as non-RAND terms by the offeree. Indeed, it would appear that at any point in the negotiation process, the parties may have a genuine disagreement as to what terms and conditions of a license constitute RAND under the parties’ unique circumstances.

Resolution of such an impasse may ultimately fall to the courts. Thus the royalty rate issue is in fact closely related to the second issue raised by the EC’s investigation: the use or threat of injunction to enforce standards-essential patents.

While some scholars and many policy advocates claim that injunctions in the standards context raise the specter of costly hold-ups (patent holders extracting not only the market value of their patent, but also a portion of the costs that the infringer would incur if it had to implement its technology without the patent), there is no empirical evidence supporting the claim that patent holdup is a pervasive problem.

And the theory doesn’t comfortably support such a claim, either. Motorola, for example, has no interest in actually enforcing an injunction: Doing so is expensive and, notably, not nearly as good for the bottom line as actually receiving royalties from an agreed-upon contract. Instead, injunctions are, just like the more-attenuated liability suit for patent infringement, a central aspect of our intellectual property system, the means by which innovators and their financiers can reasonably expect a return on their substantial up-front investments in technology development.

Moreover, and apparently unbeknownst to those who claim that injunctions are the antithesis of negotiated solutions to licensing contests, the threat of injunction actually facilitates efficient transacting. Injunctions provide clearer penalties than damage awards for failing to reach consensus and are thus better at getting both parties to the table with matched expectations. And this is especially true in the standards-setting context, where the relevant parties are generally repeat players and where they very often both have patents to license and need to license others’ patents essential to the standard—both of which help to induce everyone to come to the table, lest they find themselves closed off from patents essential to their own products.
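To illustrate the matched-expectations point, here is a stylized numerical sketch of my own (all payoffs are hypothetical, drawn from none of the cases discussed): under a certain injunction both sides bargain against the same outside option, while under noisy damages an optimistic infringer and an optimistic patentee can bargain against very different ones.

```python
# Stylized sketch: a certain injunction vs. an uncertain damages award.
# All numbers are hypothetical.

injunction_cost = 100.0  # certain cost to the infringer if negotiations fail
print(f"Certain injunction cost: {injunction_cost:.0f}")

# Under damages, the award is a gamble:
damage_outcomes = [20.0, 60.0, 220.0]
probabilities = [0.4, 0.4, 0.2]
expected_damages = sum(p * d for p, d in zip(probabilities, damage_outcomes))
print(f"Expected damages award: {expected_damages:.0f}")  # 76

# An optimistic infringer bargains as if the award will be 20, while an
# optimistic patentee bargains as if it will be 220; their reservation
# prices need not overlap, and negotiation stalls. With the injunction,
# both sides bargain against the same certain 100, so a royalty near
# that value clears quickly.
```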

Antitrust intervention in standard setting negotiations based on an allegedly high initial royalty rate offer or the use of an injunction to enforce a patent is misdirected and costly. One of the clearest statements of the need for antitrust restraint in the standard setting context comes from a June 2011 comment filed with the FTC:

[T]he existence of a RAND commitment to offer patent licenses should not preclude a patent holder from seeking preliminary injunctive relief. . . . Any uniform declaration that such relief would not be available if the patent holder has made a commitment to offer a RAND license for its essential patent claims in connection with a standard may reduce any incentives that implementers might have to engage in good faith negotiations with the patent holder.

Most of the SSOs and their stakeholders that have considered these proposals over the years have determined that there are only a limited number of situations where patent hold-up takes place in the context of standards-setting. The industry has determined that those situations generally are best addressed through bi-lateral negotiation (and, in rare cases, litigation) as opposed to modifying the SSO’s IPR policy [by precluding injunctions or mandating a particular negotiation process].

The statement’s author? Why, Microsoft, of course.

Patents are an important tool for encouraging the development and commercialization of advanced technology, as are standard setting organizations. Antitrust authorities should exercise great restraint before intervening in the complex commercial negotiations over technology patents and standards. In Motorola’s case, the evidence of conduct that might harm competition is absent, and all that remains are, in essence, allegations that Motorola is bargaining hard and enforcing its property rights. The EC should let competition run its course.

The DOJ’s recent press release on the Google/Motorola, Rockstar Bidco, and Apple/Novell transactions struck me as a bit odd when I read it.  As I’ve now had a bit of time to digest it, I’ve grown to really dislike it.  For those who have not been following along, Jorge Contreras has an excellent summary of events at Patently-O.

For those of us who have been following the telecom patent battles, something remarkable happened a couple of weeks ago.  On February 7, the Wall St. Journal reported that, back in November, Apple sent a letter[1] to the European Telecommunications Standards Institute (ETSI) setting forth Apple’s position regarding its commitment to license patents essential to ETSI standards.  In particular, Apple’s letter clarified its interpretation of the so-called “FRAND” (fair, reasonable and non-discriminatory) licensing terms that ETSI participants are required to use when licensing standards-essential patents.  As one might imagine, the actual scope and contours of FRAND licenses have puzzled lawyers, regulators and courts for years, and past efforts at clarification have never been very successful.  The next day, on February 8, Google released a letter[2] that it sent to the Institute for Electrical and Electronics Engineers (IEEE), ETSI and several other standards organizations.  Like Apple, Google sought to clarify its position on FRAND licensing.  And just hours after Google’s announcement, Microsoft posted a statement of “Support for Industry Standards”[3] on its web site, laying out its own gloss on FRAND licensing.  For those who were left wondering what instigated this flurry of corporate “clarification”, the answer arrived a few days later when, on February 13, the Antitrust Division of the U.S. Department of Justice (DOJ) released its decision[4] to close the investigation of three significant patent-based transactions:  the acquisition of Motorola Mobility by Google, the acquisition of a large patent portfolio formerly held by Nortel Networks by “Rockstar Bidco” (a group including Microsoft, Apple, RIM and others), and the acquisition by Apple of certain Linux-related patents formerly held by Novell.  In its decision, the DOJ noted with approval the public statements by Apple and Microsoft, while expressing some concern with Google’s FRAND approach.  The European Commission approved Google’s acquisition of Motorola Mobility on the same day.

To understand the significance of the Apple, Microsoft and Google FRAND statements, some background is in order.  The technical standards that enable our computers, mobile phones and home entertainment gear to communicate and interoperate are developed by corps of “volunteers” who get together in person and virtually under the auspices of standards-development organizations (SDOs).  These SDOs include large, international bodies such as ETSI and IEEE, as well as smaller consortia and interest groups.  The engineers who do the bulk of the work, however, are not employees of the SDOs (which are usually thinly-staffed non-profits), but of the companies who plan to sell products that implement the standards: the Apples, Googles, Motorolas and Microsofts of the world.  Should such a company obtain a patent covering the implementation of a standard, it would be able to exert significant leverage over the market for products that implemented the standard.  In particular, if a patent holder were to obtain, or even threaten to obtain, an injunction against manufacturers of competing standards-compliant products, either the standard would become far less useful, or the market would experience significant unanticipated costs.  This phenomenon is what commentators have come to call “patent hold-up”.  Due to the possibility of hold-up, most SDOs today require that participants in the standards-development process disclose their patents that are necessary to implement the standard and/or commit to license those patents on FRAND terms.

As Contreras notes, an important part of these FRAND commitments offered by Google, Motorola, and Apple related to the availability of injunctive relief (do go see the handy chart in Contreras’ post laying out the key differences in the commitments).  Contreras usefully summarizes the three statements’ positions on injunctive relief:

In their February FRAND statements, Apple and Microsoft each commit not to seek injunctions on the basis of their standards-essential patents.  Google makes a similar commitment, but qualifies it in typically lawyerly fashion (Google’s letter is more than 3 single-spaced pages in length, while Microsoft’s simple statement occupies about a quarter of a page).  In this case, Google’s careful qualifications (injunctive relief might be possible if the potential licensee does not itself agree to refrain from seeking an injunction, if licensing negotiations extended beyond a reasonable period, and the like) worked against it.  While the DOJ applauds Apple’s and Microsoft’s statements “that they will not seek to prevent or exclude rivals’ products from the market”, it views Google’s commitments as “less clear”.  The DOJ thus “continues to have concerns about the potential inappropriate use of [standards-essential patents] to disrupt competition”.

It’s worth reading the DOJ’s press release on this point — specifically, that while the DOJ found that none of the three transactions itself raised competitive concerns or was substantially likely to lessen competition, the DOJ expressed general concerns about the relationship between these firms’ market positions and their ability to use the threat of injunctive relief to hold up rivals:

Apple’s and Google’s substantial share of mobile platforms makes it more likely that as the owners of additional SEPs they could hold up rivals, thus harming competition and innovation.  For example, Apple would likely benefit significantly through increased sales of its devices if it could exclude Android-based phones from the market or raise the costs of such phones through IP-licenses or patent litigation.  Google could similarly benefit by raising the costs of, or excluding, Apple devices because of the revenues it derives from Android-based devices.

The specific transactions at issue, however, are not likely to substantially lessen competition.  The evidence shows that Motorola Mobility has had a long and aggressive history of seeking to capitalize on its intellectual property and has been engaged in extended disputes with Apple, Microsoft and others.  As Google’s acquisition of Motorola Mobility is unlikely to materially alter that policy, the division concluded that transferring ownership of the patents would not substantially alter current market dynamics.  This conclusion is limited to the transfer of ownership rights and not the exercise of those transferred rights.

With respect to Apple/Novell, the division concluded that the acquisition of the patents from CPTN, formerly owned by Novell, is unlikely to harm competition.  While the patents Apple would acquire are important to the open source community and to Linux-based software in particular, the OIN, to which Novell belonged, requires its participating patent holders to offer a perpetual, royalty-free license for use in the “Linux-system.”  The division investigated whether the change in ownership would permit Apple to avoid OIN commitments and seek royalties from Linux users.  The division concluded it would not, a conclusion made easier by Apple’s commitment to honor Novell’s OIN licensing commitments.

In its analysis of the transactions, the division took into account the fact that during the pendency of these investigations, Apple, Google and Microsoft each made public statements explaining their respective SEP licensing practices.  Both Apple and Microsoft made clear that they will not seek to prevent or exclude rivals’ products from the market in exercising their SEP rights.

What’s problematic about a competition enforcement agency extracting promises not to enforce lawfully obtained property rights during merger review, outside the formal consent process, and in transactions that do not raise competitive concerns themselves?  For starters, the DOJ’s expression of competitive concerns about “hold up” obfuscates an important issue.  In Rambus, the D.C. Circuit clearly held that not all forms of what the DOJ describes here as patent holdup violate the antitrust laws in the first instance.  Both appellate courts to have discussed patent holdup as an antitrust violation have held that the patent holder must deceptively induce the SSO to adopt the patented technology.  Rambus makes clear — as I’ve discussed — that a firm with lawfully acquired monopoly power that merely raises prices does not violate the antitrust laws.  The proposition that all forms of patent holdup are antitrust violations is thus dubious.  For an agency to extract concessions that go beyond the scope of the antitrust laws at all, much less through merger review of transactions that do not themselves raise competitive concerns, is troubling.

Here is what the DOJ says about Google’s commitment:

If adhered to in practice, these positions could significantly reduce the possibility of a hold up or use of an injunction as a threat to inhibit or preclude innovation and competition.

Google’s commitments have been less clear.  In particular, Google has stated to the IEEE and others on Feb. 8, 2012, that its policy is to refrain from seeking injunctive relief for the infringement of SEPs against a counter-party, but apparently only for disputes involving future license revenues, and only if the counterparty:  forgoes certain defenses such as challenging the validity of the patent; pays the full disputed amount into escrow; and agrees to a reciprocal process regarding injunctions.  Google’s statement therefore does not directly provide the same assurance as the other companies’ statements concerning the exercise of its newly acquired patent rights.  Nonetheless, the division determined that the acquisition of the patents by Google did not substantially lessen competition, but how Google may exercise its patents in the future remains a significant concern.

No doubt the DOJ statement is accurate and the DOJ’s concerns about patent holdup are genuine.  But that’s not the point.

The question of the appropriate role for injunctions and damages in patent infringement litigation is a complex one.  Many scholars certainly argue that the use of injunctions facilitates patent holdup and threatens innovation.  There are serious debates to be had about whether more vigorous antitrust enforcement of the contractual relationships between patent holders and standard setting organizations (SSOs) would spur greater innovation.  The empirical evidence suggesting that patent holdup is a pervasive problem is, however, at best quite mixed.  Further, others argue that the availability of injunctions is not only a fundamental aspect of our system of property rights but also, from an economic perspective, a device that facilitates efficient transacting by the parties.  For example, some contend that the power to obtain injunctive relief for infringement within the patent thicket results in a “cold war” of sorts, in which the threat alone is sufficient to induce cross-licensing by all parties.  Surely this is not first best.  But that isn’t the relevant question.

There are other, more fundamental problems with the notion of patent holdup as an antitrust concern.  Kobayashi & Wright raise concerns with the theoretical case for antitrust enforcement against patent holdup on several grounds.  One is that the high probability of detecting patent holdup, coupled with antitrust’s treble damages, makes overdeterrence highly likely.  Another is that alternative remedies, such as contract and the patent doctrine of equitable estoppel, render the marginal benefits of antitrust enforcement trivial or negative in this context.  Froeb, Ganglmair & Werden raise similar points.  Suffice it to say that the debate on the appropriate scope of antitrust enforcement against patent holdup is ongoing as a general matter; there is certainly no consensus in economic theory or empirical evidence that stripping the availability of injunctive relief from patent holders entering into contractual relationships with SSOs will enhance competition or improve consumer welfare.  It is quite possible that such an intervention would chill competition, participation in SSOs, and the efficient contracting process potentially facilitated by the availability of injunctive relief.
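The overdeterrence point is just standard optimal-deterrence arithmetic (a textbook sketch, not Kobayashi & Wright’s own notation): the optimal damages multiplier is the inverse of the probability that the conduct is detected and punished.

```latex
% Optimal damages multiplier m* given detection probability p:
%   m* = 1/p
% Holdup within an SSO is nearly always detected (p close to 1), so the
% optimal multiplier is close to 1 -- antitrust's fixed treble-damages
% multiplier (m = 3) therefore overdeters.
\[
  m^{*} = \frac{1}{p}, \qquad p \approx 1 \;\Rightarrow\; m^{*} \approx 1 < 3 .
\]
```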

The policy debate I describe above is an important one.  Many of the questions at the center of that complex debate are not settled as a matter of economic theory, empirics, or law.  This post certainly has no ambitions to resolve them; my goal is a much more modest one.  The DOJ’s policymaking efforts through the merger review process raise serious issues.  I would hope that all would agree — regardless of where they stand on the patent holdup debate — that hammering out these complex debates in merger review at the DOJ, simply because the DOJ happens to have a number of cases involving patent portfolios in front of it, is foolish for several reasons.

First, it is unclear whether the DOJ could have extracted these FRAND concessions through proper merger review.  The DOJ apparently agreed that the transactions did not raise serious competitive concerns.  Pressuring the parties, outside of the normal consent process, to commit to the SSOs not to pursue injunctive relief as part of a FRAND commitment raises serious concerns.  The imposition of settlement conditions far afield from the competitive consequences of the merger itself is something we see quite frequently from antitrust enforcement agencies in other countries, but this sort of behavior burns significant reputational capital when our agencies go abroad to lecture the rest of the world on the importance of keeping antitrust analysis consistent, predictable, and based upon the economic fundamentals of the transaction at hand.

Second, the DOJ Antitrust Division has no special comparative advantage in determining the optimal use of injunctions versus damages in the patent system.

Third, appearances here are quite problematic.  Given that the DOJ did not appear to have significant competitive concerns with the transactions, one can construct the following narrative of events without too much creative effort: (1) the DOJ team has theoretical priors that injunctive relief is a significant competitive problem; (2) the DOJ happens to have pending before it mergers involving a couple of firms likely to be repeat players in the antitrust enforcement game; and (3) the DOJ asks the firms to make these concessions despite the fact that they have little to do with the conventional antitrust analysis of the transactions, under which the deals would have been approved without condition.

The more I think about the use of the merger review process to extract concessions from patent holders in the form of promises not to enforce property rights they would otherwise be legally entitled to enforce, the more the DOJ’s actions appear inappropriate.  The stakes are high here, both in terms of identifying patent and competition rules that will foster rather than hamper innovation, and with respect to compromising the integrity of merger review through the imposition of non-merger-related conditions of the sort we are more accustomed to seeing from the FCC, the states, or less well-developed antitrust regimes.