
It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s decision to jettison, without hard evidence, the FCC’s light-touch framework for Internet access regulation to the Oklahoma City Thunder’s James Harden trade. That infamous 2012 deal broke up a young nucleus of three of the best players in the NBA because keeping all three might someday create salary cap concerns. What few saw coming was a new TV deal in 2015 that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked about what the standard meant (not long after writing it), former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in Chairman Powell’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user that has been subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers that create free data offerings provide many benefits to consumers, including:

  • enabling subscribers to consume more data (or, for low-income users, to consume data in the first place);
  • facilitating product differentiation by mobile operators that offer a variety of free data plans (including allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan);
  • increasing the overall consumption of content; and
  • reducing users’ cost of obtaining information.

Free data is also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.

In December 2015, then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than blocking free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the 2016 EC guidelines left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria under which to evaluate free data plans. The criteria include assessing the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether there is financial compensation involved. The standard is open-ended, and free data plans as they are offered in the US would “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as that of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. In contrast to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter and WhatsApp was offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data to subscribers who pay for a data plan, in order to ensure free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokemon Go, Waze, Snapchat, Apple Music, Spotify, Netflix and YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives to both flex their regulatory muscle to govern Internet access, while simultaneously making free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved, low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. Notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, in other words, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes, which are unlikely to be nearly as nimble or as effective — and yet still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model for users’ early-stage experimentation with, and adoption of, augmented reality, virtual reality and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut free data off at the knees under the OIO, absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: It is a rule rife with uncertainties and merely theoretical problems, needlessly saddling companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence that allows consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans), while still facilitating a means for policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Netflix’s latest net neutrality hypocrisy (yes, there have been others. See here and here, for example) involves its long-term, undisclosed throttling of its video traffic on AT&T’s and Verizon’s wireless networks, while it lobbied heavily for net neutrality rules from the FCC that would prevent just such throttling by ISPs.

It was Netflix that coined the term “strong net neutrality,” in an effort to import interconnection (the connections between ISPs and edge provider networks) into the net neutrality fold. That alone was a bastardization of what net neutrality purportedly stood for, as I previously noted:

There is a reason every iteration of the FCC’s net neutrality rules, including the latest, have explicitly not applied to backbone interconnection agreements: Interconnection over the backbone has always been open and competitive, and it simply doesn’t give rise to the kind of discrimination concerns net neutrality is meant to address.

That Netflix would prefer not to pay for delivery of its content isn’t surprising. But net neutrality regulations don’t — and shouldn’t — have anything to do with it.

But Netflix did something else with “strong net neutrality.” It tied it to consumer choice:

This weak net neutrality isn’t enough to protect an open, competitive Internet; a stronger form of net neutrality is required. Strong net neutrality additionally prevents ISPs from charging a toll for interconnection to services like Netflix, YouTube, or Skype, or intermediaries such as Cogent, Akamai or Level 3, to deliver the services and data requested by ISP residential subscribers. Instead, they must provide sufficient access to their network without charge. (Emphasis added).

A focus on consumers is laudable, of course, but when the focus is on consumers there’s no reason to differentiate between ISPs (to whom net neutrality rules apply) and content providers entering into contracts with ISPs to deliver their content (to whom net neutrality rules don’t apply).

And Netflix has just shown us exactly why that’s the case.

Netflix can and does engage in management of its streams in order (presumably) to optimize consumer experience as users move between networks, devices and viewers (e.g., native apps vs Internet browser windows) with very different characteristics and limitations. That’s all well and good. But as we noted in our Policy Comments in the FCC’s Open Internet Order proceeding,

In this circumstance, particularly when the content in question is Netflix, with 30% of network traffic, both the network’s and the content provider’s transmission decisions may be determinative of network quality, as may the users’ device and application choices.

As a 2011 paper by a group of network engineers studying the network characteristics of video streaming data from Netflix and YouTube noted:

This is a concern as it means that a sudden change of application or container in a large population might have a significant impact on the network traffic. Considering the very fast changes in trends this is a real possibility, the most likely being a change from Flash to HTML5 along with an increase in the use of mobile devices…. [S]treaming videos at high resolutions can result in smoother aggregate traffic while at the same time linearly increase the aggregate data rate due to video streaming.

Again, a concern with consumers is admirable, but Netflix isn’t concerned with consumers. It’s concerned at most with consumers of Netflix, while they are consuming Netflix. But the reality is that Netflix’s content management decisions can adversely affect consumers overall, including its own subscribers when they aren’t watching Netflix.

And here’s the huge irony. The FCC’s net neutrality rules are tailor-made to guarantee that Netflix will never have any incentive to take these externalities into account in its own decisions. What’s more, they ensure that ISPs are severely hamstrung in managing their networks for the benefit of all consumers, not least because their interconnection deals with large content providers like Netflix are now being closely scrutinized.

It’s great that Netflix thinks it should manage its video delivery to optimize viewing under different network conditions. But net neutrality rules ensure that Netflix bears no cost for overwhelming the network in the process. Essentially, short of building new capacity — at great expense to all ISP subscribers, of course — ISPs can’t do much about it, either, under the rules. And, of course, the rules also make it impossible for ISPs to negotiate for financial help from Netflix (or its heaviest users) in paying for those upgrades.

On top of this, net neutrality advocates have taken aim at usage-based billing and other pricing practices that would help with the problem by enabling ISPs to charge their heaviest users more in order to alleviate the inherent subsidy by normal users that flat-rate billing entails. (Netflix itself, as one of the articles linked above discusses at length, is hypocritically inconsistent on this score).

As we also noted in our OIO Policy Comments:

The idea that consumers and competition generally are better off when content providers face no incentive to take account of congestion externalities in their pricing (or when users have no incentive to take account of their own usage) runs counter to basic economic logic and is unsupported by the evidence. In fact, contrary to such claims, usage-based pricing, congestion pricing and sponsored content, among other nonlinear pricing models, would, in many circumstances, further incentivize networks to expand capacity (not create artificial scarcity).

Some concern for consumers. Under Netflix’s approach consumers get it coming and going: Either their non-Netflix traffic is compromised for the sake of Netflix’s traffic, or they have to pay higher subscription fees to ISPs for the privilege of accommodating Netflix’s ever-expanding traffic loads (4K videos, anyone?) — whether they ever use Netflix or not.

Sometimes, apparently, Netflix throttles its own traffic in order to “help” a few consumers. (That it does so without disclosing the practice is pretty galling, especially given the enhanced transparency rules in the Open Internet Order — something Netflix also advocated for, and which also apply only to ISPs and not to content providers). But its self-aggrandizing advocacy for the FCC’s latest net neutrality rules reveals that its first priority is to screw over consumers, so long as it can shift the blame and the cost to others.

It’s easy to look at the net neutrality debate and assume that everyone is acting in their self-interest and against consumer welfare. Thus, many on the left denounce all opposition to Title II as essentially “Comcast-funded,” aimed at undermining the Open Internet to further nefarious, hidden agendas. No matter how often opponents make the economic argument that Title II would reduce incentives to invest in the network, many will not listen because they have convinced themselves that it is simply special-interest pleading.

But whatever you think of ISPs’ incentives to oppose Title II, the incentive for the tech companies (like Cisco, Qualcomm, Nokia and IBM) that design and build key elements of network infrastructure and the devices that connect to it (i.e., essential input providers) is to build out networks and increase adoption (i.e., to expand output). These companies’ fundamental incentive with respect to regulation of the Internet is the adoption of rules that favor investment. They operate in highly competitive markets, they don’t offer competing content and they don’t stand as alleged “gatekeepers” seeking monopoly returns from, or control over, what crosses over the Interwebs.

Thus, it is no small thing that 60 tech companies — including some of the world’s largest, based both in the US and abroad — that are heavily invested in the buildout of networks and devices, as well as more than 100 manufacturing firms that are increasingly building the products and devices that make up the “Internet of Things,” have written letters strongly opposing the reclassification of broadband under Title II.

There is probably no more objective evidence that Title II reclassification will harm broadband deployment than the opposition of these informed market participants.

These companies have the most to lose from reduced buildout, and no reasonable nefarious plots can be constructed to impugn their opposition to reclassification as consumer-harming self-interest in disguise. Their self-interest is on their sleeves: More broadband deployment and adoption — which is exactly what the Open Internet proceedings are supposed to accomplish.

If the FCC chooses the reclassification route, it will most assuredly end up in litigation. And when it does, the opposition of these companies to Title II should be Exhibit A in the effort to debunk the FCC’s purported basis for its rules: the “virtuous circle” theory that says that strong net neutrality rules are necessary to drive broadband investment and deployment.

Access to all the wonderful content the Internet has brought us is not possible without the billions of dollars that have been invested in building the networks and devices themselves. Let’s not kill the goose that lays the golden eggs.

Today the D.C. Circuit struck down most of the FCC’s 2010 Open Internet Order, rejecting rules that required broadband providers to carry all traffic for edge providers (“anti-blocking”) and prevented providers from negotiating deals for prioritized carriage. However, the appeals court did conclude that the FCC has statutory authority to issue “Net Neutrality” rules under Section 706(a) and let stand the FCC’s requirement that broadband providers clearly disclose their network management practices.

The following statement may be attributed to Geoffrey Manne and Berin Szoka:

The FCC may have lost today’s battle, but it just won the war over regulating the Internet. By recognizing Section 706 as an independent grant of statutory authority, the court has given the FCC near limitless power to regulate not just broadband, but the Internet itself, as Judge Silberman recognized in his dissent.

The court left the door open for the FCC to write new Net Neutrality rules, provided the Commission doesn’t treat broadband providers as common carriers. This means that, even without reclassifying broadband as a Title II service, the FCC could require that any deals between broadband and content providers be reasonable and non-discriminatory, just as it has required wireless carriers to provide data roaming services to their competitors’ customers on that basis. In principle, this might be a sound approach, if the rule resembles antitrust standards. But even that limitation could easily be evaded if the FCC regulates through case-by-case enforcement actions, as it tried to do before issuing the Open Internet Order. Either way, the FCC need only make a colorable argument under Section 706 that its actions are designed to “encourage the deployment… of advanced telecommunications services.” If the FCC’s tenuous “triple cushion shot” argument could satisfy that test, there is little limit to the deference the FCC will receive.

But that’s just for Net Neutrality. Section 706 covers “advanced telecommunications,” which seems to include any information service, from broadband to the interconnectivity of smart appliances like washing machines and home thermostats. If the court’s ruling on Section 706 is really as broad as it sounds, and as the dissent fears, the FCC just acquired wide authority over these, as well — in short, the entire Internet, including the “Internet of Things.” While the court’s “no common carrier rules” limitation is a real one, the FCC clearly just gained enormous power that it didn’t have before today’s ruling.

Today’s decision essentially rewrites the Communications Act in a way that will, ironically, do the opposite of what the FCC claims: hurt, not help, deployment of new Internet services. Whatever the FCC’s role ought to be, such decisions should be up to our elected representatives, not three unelected FCC Commissioners. So if there’s a silver lining in any of this, it may be that the true implications of today’s decision are so radical that Congress finally writes a new Communications Act — a long-overdue process Congressmen Fred Upton and Greg Walden have recently begun.

Szoka and Manne are available for comment.

For those in the DC area interested in telecom regulation, there is another great event opportunity coming up next week.

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.

The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.

So which was it? More importantly, what can we learn from the seventy-year period before the 1984 break-up of AT&T, and the last three decades of efforts to unleash competition? With fewer than a third of Americans relying on traditional telephony and Internet-based competitors increasingly driving competition, what does universal service mean in the digital era? As Congress contemplates overhauling the Communications Act, how can policymakers promote universal service through competition, by promoting innovation and investment? What should a new Kingsbury Commitment look like?

Following a luncheon keynote address by FCC Commissioner Ajit Pai, a diverse panel of experts moderated by TechFreedom President Berin Szoka will explore these issues and more. The panel includes:

  • Harold Feld, Public Knowledge
  • Rob Atkinson, Information Technology & Innovation Foundation
  • Hance Haney, Discovery Institute
  • Jeff Eisenach, American Enterprise Institute
  • Fred Campbell, Former FCC Commissioner

Space is limited so RSVP now if you plan to attend in person. A live stream of the event will be available on this page. You can follow the conversation on Twitter on the #Kingsbury100 hashtag.

Thursday, December 19, 2013
11:30 – 12:00 Registration & lunch
12:00 – 1:45 Event & live stream

The live stream will begin on this page at noon Eastern.

The Methodist Building
100 Maryland Ave NE
Washington D.C. 20002


I have a new post up (excerpted below) in which I discuss the growing body of (surprisingly uncontroversial) work showing that broadband in the US compares favorably to that in the rest of the world. My conclusion, which is frankly more cynical than I like, is that concern about the US “falling behind” is a manufactured debate. It’s a compelling story that the media likes and that plays well for (some) academics.

Before the excerpt, I’d also like to quote one of today’s headlines from Slashdot:

“Google launched the citywide Wi-Fi network with much fanfare in 2006 as a way for Mountain View residents and businesses to connect to the Internet at no cost. It covers most of the Silicon Valley city and worked well until last year, as Slashdot readers may recall, when connectivity got rapidly worse. As a result, Mountain View is installing new Wi-Fi hotspots in parts of the city to supplement the poorly performing network operated by Google. Both the city and Google have blamed the problems on the design of the network. Google, which is involved in several projects to provide Internet access in various parts of the world, said in a statement that it is ‘actively in discussions with the Mountain View city staff to review several options for the future of the network.'”

The added emphasis is mine. It is added to draw attention to the simple point that designing and building networks is hard. Like, really really hard. Folks think that it’s easy, because they have small networks in their homes or offices — so surely they can scale to a nationwide network without much trouble. But all sorts of crazy stuff starts to happen when we substantially increase the scale of IP networks. This is just one of the very many things that should give us pause about calls for the buildout of a government-run or government-sponsored Internet infrastructure.

Another of those things is whether there’s any need for that. Which brings us to my post:

In the week or so since TPRC, I’ve found myself dwelling on an observation I made during the conference: how much agreement there was, especially on issues usually thought of as controversial. I want to take a few paragraphs to consider what was probably the most surprisingly non-controversial panel of the conference, the final Internet Policy panel, in which two papers – one by ITIF’s Rob Atkinson and the other by James McConnaughey from NTIA – were presented that showed that broadband Internet service in the US (and Canada, though I will focus on the US) compares quite well to that offered in the rest of the world. […]

But the real question that this panel raised for me was: given how well the US actually compares to other countries, why does concern about the US falling behind dominate so much discourse in this area? When you get technical, economic, legal, and policy experts together in a room – which is what TPRC does – the near consensus seems to be that the “kids are all right”; but when you read the press, or much of the high-profile academic literature, “the sky is falling.”

The gap between these assessments could not be larger. I think that we need to think about why this is. I hate to be cynical or disparaging – especially since I know strong advocates on both sides and believe that their concerns are sincere and efforts earnest. But after this year’s conference, I’m having trouble shaking the feeling that ongoing concern about how US broadband stacks up to the rest of the world is a manufactured debate. It’s a compelling, media- and public-friendly, narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. […]

Compare this to the Chicken Little narrative. As I was writing this, I received a message from a friend asking my views on an Economist blog post that shares data from the ITU’s just-released Measuring the Information Society 2013 report. This data shows that the US has some of the highest prices for pre-paid handset-based mobile data around the world. That is, it reports the standard narrative – and it does so without looking at the report’s methodology. […]

Even more problematic than what the Economist blog reports, however, is what it doesn’t report. [The report contains data showing the US has some of the lowest-cost fixed broadband and mobile broadband prices in the world. See the full post for the numbers.]

Now, there are possible methodological problems with these rankings, too. My point here isn’t to debate the relative position of the United States. It’s to ask why the “story” about this report cherry-picks the alarming data, doesn’t consider its methodology, and ignores the data that contradicts its story.

Of course, I answered that question above: It’s a compelling, media- and public-friendly narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. Manufacturing debate sells copy and ads, and advances careers.

Susan Crawford recently received the OneCommunity Broadband Hero Award for being a “tireless advocate for 21st century high capacity network access.” In her recent debate with Geoffrey Manne and Berin Szoka, she emphasized that there is little competition in broadband or between cable broadband and wireless, asserting that the main players have effectively divided the markets. As a result, she argues (as she did here at 17:29) that broadband and wireless providers “are deciding not to invest in the very expensive infrastructure because they are very happy with the profits they are getting now.” In the debate, Manne countered by pointing to substantial investment and innovation in both the wired and wireless broadband marketplaces, and arguing that this is not something monopolists insulated from competition do. So, who’s right?

The recently released 2013 Progressive Policy Institute Report, U.S. Investment Heroes of 2013: The Companies Betting on America’s Future, has two useful little tables that lend support to Manne’s counterargument.


The first shows the top 25 investors that are nonfinancial companies, and guess who comes in 1st, 2nd, 10th, 13th, and 17th place? None other than AT&T, Verizon Communications, Comcast, Sprint Nextel, and Time Warner, respectively.


And when the table is adjusted by removing energy companies, those ranks become 1st, 2nd, 5th, 6th, and 9th. In fact, cable and telecom companies combined to invest over $50.5 billion in 2012.

This high level of investment by supposed monopolists is not a new development. The Progressive Policy Institute’s 2012 Report, Investment Heroes: Who’s Betting on America’s Future? indicates that the same main players have been investing heavily for years. Since 1996, the cable industry has invested over $200 billion into infrastructure alone. These investments have allowed 99.5% of Americans to have access to broadband – via landline, wireless, or both – as of the end of 2012.

There’s more. Not only has there been substantial investment that has increased access, but the speeds of service have increased dramatically over the past few years. The National Broadband Map data show that by the end of 2012:

  • Landline service ≧ 25 megabits per second download available to 81.7% of households, up from 72.9% at the end of 2011 and 58.4% at the end of 2010
  • Landline service ≧ 100 megabits per second download available to 51.5% of households, up from 43.4% at the end of 2011 and only 12.9% at the end of 2010
  • Service ≧ 1 gigabit per second download available to 6.8% of households, predominantly via fiber
  • Fiber at any speed available to 22.9% of households, up from 16.8% at the end of 2011 and 14.8% at the end of 2010
  • Landline broadband service at the 3 megabits / 768 kilobits threshold available to 93.4% of households, up from 92.8% at the end of 2011
  • Mobile wireless broadband at the 3 megabits / 768 kilobits threshold available to 94.1% of households, up from 75.8% at the end of 2011
  • Mobile wireless broadband ≧ 10 megabits per second download available to 87.0% of households, up from 70.6% at the end of 2011 and 8.9% at the end of 2010
  • Landline broadband ≧ 10 megabits per second download available to 91.1% of households
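The trend in those numbers is easy to quantify. As a quick sketch (using only the percentages quoted above; the metric labels are my shorthand, not National Broadband Map field names), the year-over-year percentage-point gains work out as follows:

```python
# Year-end availability (% of US households) as reported above;
# keys are (metric, year). Labels are illustrative shorthand.
availability = {
    ("landline >= 25 Mbps", 2010): 58.4,
    ("landline >= 25 Mbps", 2011): 72.9,
    ("landline >= 25 Mbps", 2012): 81.7,
    ("landline >= 100 Mbps", 2010): 12.9,
    ("landline >= 100 Mbps", 2011): 43.4,
    ("landline >= 100 Mbps", 2012): 51.5,
    ("mobile >= 10 Mbps", 2010): 8.9,
    ("mobile >= 10 Mbps", 2011): 70.6,
    ("mobile >= 10 Mbps", 2012): 87.0,
}

def yoy_gains(data):
    """Return the percentage-point gain per metric, year over year."""
    gains = {}
    metrics = {m for m, _ in data}
    for m in metrics:
        years = sorted(y for mm, y in data if mm == m)
        # Pair each year after the first with its gain over the prior year.
        gains[m] = [(y, round(data[(m, y)] - data[(m, years[i])], 1))
                    for i, y in enumerate(years[1:])]
    return gains

for metric, gain in sorted(yoy_gains(availability).items()):
    print(metric, gain)
```

The mobile numbers are the striking ones: a 61.7-point jump in 2011 alone, which is hard to square with the claim that these providers are “deciding not to invest.”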

This leaves only one question: Will the real broadband heroes please stand up?

On Tuesday the European Commission opened formal proceedings against Motorola Mobility based on its patent licensing practices surrounding some of its core cellular telephony, Internet video and Wi-Fi technology. The Commission’s concerns, echoing those raised by Microsoft and Apple, center on Motorola’s allegedly high royalty rates and its efforts to use injunctions to enforce the “standards-essential patents” at issue.

As it happens, this development is just the latest, like so many in the tech world these days, in Microsoft’s ongoing regulatory, policy and legal war against Google, which announced in August it was planning to buy Motorola.

Microsoft’s claim – echoed in the Commission’s concern – that Motorola’s royalty offer was, in Microsoft’s colorful phrase, “so over-reaching that no rational company could ever have accepted it or even viewed it as a legitimate offer,” is misplaced. Motorola is seeking a royalty rate for its patents that is seemingly in line with customary rates.

In fact, Microsoft’s claim that Motorola’s royalty ask is extraordinary is refuted by its own conduct. As one commentator notes:

Microsoft complained that it might have to pay a tribute of up to $22.50 for every $1,000 laptop sold, and suggested that it might be fairer to pay just a few cents. This is the firm that is thought to make $10 to $15 from every $500 Android device that is sold, and for a raft of trivial software patents, not standard essential ones.

Seemingly forgetting this, Microsoft criticizes Motorola’s royalty ask on its 50 H.264 video codec patents by comparing it to the amount Microsoft pays for more than 2000 other patents in the video codec’s patent pool, claiming that the former would cost it $4 billion while the latter costs it only $6.5 million. But this is comparing apples and oranges. It is not surprising to find some patents worth orders of magnitude more than others and to find that license rates are a complicated function of the contracting parties’ particular negotiating positions and circumstances. It is no more inherently inappropriate for Microsoft to rake in 2-3% of the price of every Nook Barnes & Noble sells than it is for Motorola to net 2.25% of the price of each Windows-operated computer sold – which is the royalty rate Motorola is seeking and which Microsoft wants declared anticompetitive out of hand.

It’s not clear how much negotiation, if any, has taken place between the companies over the terms of Microsoft’s licensing of Motorola’s patents, but what is clear is that Microsoft’s complaint, echoed by the EC, is based on the size of Motorola’s initial royalty demand and its use of a legal injunction to enforce its patent rights. Unfortunately, neither of these is particularly problematic, especially in an environment where companies like Microsoft and Apple aggressively wield exactly such tools to gain a competitive negotiating edge over their own competitors.

The court adjudicating this dispute in the ongoing litigation in U.S. district court in Washington has thus far agreed. The court denied Microsoft’s request for summary judgment that Motorola’s royalty demand violated its RAND commitment, noting its disagreement with Microsoft’s claim that “it is always facially unreasonable for a proposed royalty rate to result in a larger royalty payment for products that have higher end prices. Indeed, Motorola has previously entered into licensing agreements for its declared-essential patents at royalty rates similar to those offered to Microsoft and with royalty rates based on the price of the end product.”

The staggering aggregate numbers touted by Microsoft in its complaint and repeated by bloggers and journalists the world over are not a function of Motorola seeking an exorbitant royalty but rather a function of Microsoft’s selling a lot of operating systems and earning a lot of revenue doing it. While the aggregate number ($4 billion, according to Microsoft) is huge, it is, as the court notes, based on a royalty rate that is in line with similar agreements.
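To make that point concrete, here is a minimal sketch of the arithmetic. The 2.25% rate and the $22.50-per-$1,000-laptop figure come from the discussion above; the annual unit volume is a made-up placeholder, used only to show how a modest per-unit rate scales into a headline aggregate when sales volumes are large:

```python
# Illustrative arithmetic only. The 2.25% rate is from the post; the
# 200M-unit volume below is a hypothetical placeholder, not Microsoft's
# actual sales figure.
RATE = 0.0225  # Motorola's requested royalty on the end-product price

def per_unit_royalty(end_price, rate=RATE):
    """Royalty owed on one unit, rounded to cents."""
    return round(end_price * rate, 2)

def aggregate_royalty(end_price, units, rate=RATE):
    """Total royalty across all units sold at a given price."""
    return per_unit_royalty(end_price, rate) * units

print(per_unit_royalty(1000))                 # $22.50 on a $1,000 laptop
print(aggregate_royalty(1000, 200_000_000))   # hypothetical volume -> billions

# Microsoft's reported take of $10-$15 on each $500 Android device implies
# a broadly comparable effective rate (midpoint $12.50 on $500):
implied_ms_rate = 12.5 / 500
print(implied_ms_rate)
```

The takeaway mirrors the court’s: a multi-billion-dollar aggregate is driven by the number of units sold, not by an out-of-line rate, and Microsoft’s own per-device take on Android implies a rate in the same neighborhood.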

The court also takes issue with Microsoft’s contention that the mere offer of allegedly unreasonable terms constitutes a breach of Motorola’s RAND commitment to license its patents on commercially reasonable terms. Quite sensibly, the court notes:

[T]he court is mindful that at the time of an initial offer, it is difficult for the offeror to know what would in fact constitute RAND terms for the offeree. Thus, what may appear to be RAND terms from the offeror’s perspective may be rejected out-of-hand as non-RAND terms by the offeree. Indeed, it would appear that at any point in the negotiation process, the parties may have a genuine disagreement as to what terms and conditions of a license constitute RAND under the parties’ unique circumstances.

Resolution of such an impasse may ultimately fall to the courts. Thus the royalty rate issue is in fact closely related to the second issue raised by the EC’s investigation: the use or threat of injunction to enforce standards-essential patents.

While some scholars and many policy advocates claim that injunctions in the standards context raise the specter of costly hold-ups (patent holders extracting not only the market value of their patent, but also a portion of the costs that the infringer would incur if it had to implement its technology without the patent), there is no empirical evidence supporting the claim that patent holdup is a pervasive problem.

And the theory doesn’t comfortably support such a claim, either. Motorola, for example, has no interest in actually enforcing an injunction: Doing so is expensive and, notably, not nearly as good for the bottom line as actually receiving royalties from an agreed-upon contract. Instead, injunctions are, just like the more-attenuated liability suit for patent infringement, a central aspect of our intellectual property system, the means by which innovators and their financiers can reasonably expect a return on their substantial up-front investments in technology development.

Moreover, and apparently unbeknownst to those who claim that injunctions are the antithesis of negotiated solutions to licensing contests, the threat of injunction actually facilitates efficient transacting. Injunctions provide clearer penalties than damage awards for failing to reach consensus and are thus better at bringing both parties to the table with matched expectations. And this is especially true in the standards-setting context, where the relevant parties are generally repeat players and where they very often have both patents to license and a need to license others’ patents essential to the standard—both of which help to induce everyone to come to the table, lest they find themselves closed off from patents essential to their own products.

Antitrust intervention in standard setting negotiations based on an allegedly high initial royalty rate offer or the use of an injunction to enforce a patent is misdirected and costly. One of the clearest statements of the need for antitrust restraint in the standard setting context comes from a June 2011 comment filed with the FTC:

[T]he existence of a RAND commitment to offer patent licenses should not preclude a patent holder from seeking preliminary injunctive relief. . . . Any uniform declaration that such relief would not be available if the patent holder has made a commitment to offer a RAND license for its essential patent claims in connection with a standard may reduce any incentives that implementers might have to engage in good faith negotiations with the patent holder.

Most of the SSOs and their stakeholders that have considered these proposals over the years have determined that there are only a limited number of situations where patent hold-up takes place in the context of standards-setting. The industry has determined that those situations generally are best addressed through bi-lateral negotiation (and, in rare cases, litigation) as opposed to modifying the SSO’s IPR policy [by precluding injunctions or mandating a particular negotiation process].

The statement’s author? Why, Microsoft, of course.

Patents are an important tool for encouraging the development and commercialization of advanced technology, as are standard setting organizations. Antitrust authorities should exercise great restraint before intervening in the complex commercial negotiations over technology patents and standards. In Motorola’s case, the evidence of conduct that might harm competition is absent, and all that remains are, in essence, allegations that Motorola is bargaining hard and enforcing its property rights. The EC should let competition run its course.

The DOJ’s recent press release on the Google/Motorola, Rockstar Bidco, and Apple/Novell transactions struck me as a bit odd when I read it.  Now that I’ve had some time to digest it, I’ve grown to really dislike it.  For those who have not followed the events, Jorge Contreras had an excellent summary at Patently-O.

For those of us who have been following the telecom patent battles, something remarkable happened a couple of weeks ago.  On February 7, the Wall St. Journal reported that, back in November, Apple sent a letter[1] to the European Telecommunications Standards Institute (ETSI) setting forth Apple’s position regarding its commitment to license patents essential to ETSI standards.  In particular, Apple’s letter clarified its interpretation of the so-called “FRAND” (fair, reasonable and non-discriminatory) licensing terms that ETSI participants are required to use when licensing standards-essential patents.  As one might imagine, the actual scope and contours of FRAND licenses have puzzled lawyers, regulators and courts for years, and past efforts at clarification have never been very successful.  The next day, on February 8, Google released a letter[2] that it sent to the Institute for Electrical and Electronics Engineers (IEEE), ETSI and several other standards organizations.  Like Apple, Google sought to clarify its position on FRAND licensing.  And just hours after Google’s announcement, Microsoft posted a statement of “Support for Industry Standards”[3] on its web site, laying out its own gloss on FRAND licensing.  For those who were left wondering what instigated this flurry of corporate “clarification”, the answer arrived a few days later when, on February 13, the Antitrust Division of the U.S. Department of Justice (DOJ) released its decision[4] to close the investigation of three significant patent-based transactions:  the acquisition of Motorola Mobility by Google, the acquisition of a large patent portfolio formerly held by Nortel Networks by “Rockstar Bidco” (a group including Microsoft, Apple, RIM and others), and the acquisition by Apple of certain Linux-related patents formerly held by Novell.  In its decision, the DOJ noted with approval the public statements by Apple and Microsoft, while expressing some concern with Google’s FRAND approach.  
The European Commission approved Google’s acquisition of Motorola Mobility on the same day.

To understand the significance of the Apple, Microsoft and Google FRAND statements, some background is in order.  The technical standards that enable our computers, mobile phones and home entertainment gear to communicate and interoperate are developed by corps of “volunteers” who get together in person and virtually under the auspices of standards-development organizations (SDOs).  These SDOs include large, international bodies such as ETSI and IEEE, as well as smaller consortia and interest groups.  The engineers who do the bulk of the work, however, are not employees of the SDOs (which are usually thinly-staffed non-profits), but of the companies who plan to sell products that implement the standards: the Apples, Googles, Motorolas and Microsofts of the world.  Should such a company obtain a patent covering the implementation of a standard, it would be able to exert significant leverage over the market for products that implemented the standard.  In particular, if a patent holder were to obtain, or even threaten to obtain, an injunction against manufacturers of competing standards-compliant products, either the standard would become far less useful, or the market would experience significant unanticipated costs.  This phenomenon is what commentators have come to call “patent hold-up”.  Due to the possibility of hold-up, most SDOs today require that participants in the standards-development process disclose their patents that are necessary to implement the standard and/or commit to license those patents on FRAND terms.

As Contreras notes, an important part of these FRAND commitments offered by Google, Motorola, and Apple related to the availability of injunctive relief (do go see the handy chart in Contreras’ post laying out the key differences in the commitments).  Contreras usefully summarizes the three statements’ positions on injunctive relief:

In their February FRAND statements, Apple and Microsoft each commit not to seek injunctions on the basis of their standards-essential patents.  Google makes a similar commitment, but qualifies it in typically lawyerly fashion (Google’s letter is more than 3 single-spaced pages in length, while Microsoft’s simple statement occupies about a quarter of a page).  In this case, Google’s careful qualifications (injunctive relief might be possible if the potential licensee does not itself agree to refrain from seeking an injunction, if licensing negotiations extended beyond a reasonable period, and the like) worked against it.  While the DOJ applauds Apple’s and Microsoft’s statements “that they will not seek to prevent or exclude rivals’ products from the market”, it views Google’s commitments as “less clear”.  The DOJ thus “continues to have concerns about the potential inappropriate use of [standards-essential patents] to disrupt competition”.

It’s worth reading the DOJ’s press release on this point — specifically, that while the DOJ found that none of the three transactions itself raised competitive concerns or was substantially likely to lessen competition, the DOJ expressed general concerns about the relationship between these firms’ market positions and their ability to use the threat of injunctive relief to hold up rivals:

Apple’s and Google’s substantial share of mobile platforms makes it more likely that as the owners of additional SEPs they could hold up rivals, thus harming competition and innovation.  For example, Apple would likely benefit significantly through increased sales of its devices if it could exclude Android-based phones from the market or raise the costs of such phones through IP-licenses or patent litigation.  Google could similarly benefit by raising the costs of, or excluding, Apple devices because of the revenues it derives from Android-based devices.

The specific transactions at issue, however, are not likely to substantially lessen competition.  The evidence shows that Motorola Mobility has had a long and aggressive history of seeking to capitalize on its intellectual property and has been engaged in extended disputes with Apple, Microsoft and others.  As Google’s acquisition of Motorola Mobility is unlikely to materially alter that policy, the division concluded that transferring ownership of the patents would not substantially alter current market dynamics.  This conclusion is limited to the transfer of ownership rights and not the exercise of those transferred rights.

With respect to Apple/Novell, the division concluded that the acquisition of the patents from CPTN, formerly owned by Novell, is unlikely to harm competition.  While the patents Apple would acquire are important to the open source community and to Linux-based software in particular, the OIN, to which Novell belonged, requires its participating patent holders to offer a perpetual, royalty-free license for use in the “Linux-system.”  The division investigated whether the change in ownership would permit Apple to avoid OIN commitments and seek royalties from Linux users.  The division concluded it would not, a conclusion made easier by Apple’s commitment to honor Novell’s OIN licensing commitments.

In its analysis of the transactions, the division took into account the fact that during the pendency of these investigations, Apple, Google and Microsoft each made public statements explaining their respective SEP licensing practices.  Both Apple and Microsoft made clear that they will not seek to prevent or exclude rivals’ products from the market in exercising their SEP rights.

What’s problematic about a competition enforcement agency extracting promises not to enforce lawfully obtained property rights during merger review, outside the formal consent process, and in transactions that do not raise competitive concerns themselves?  For starters, the DOJ’s expression of competitive concern about “hold up” obfuscates an important issue.  In Rambus, the D.C. Circuit clearly held that not all conduct the DOJ describes here as patent holdup violates the antitrust laws in the first instance.  Both appellate courts to discuss patent holdup as an antitrust violation have held that the patent holder must deceptively induce the SSO to adopt the patented technology.  Rambus makes clear — as I’ve discussed — that a firm with lawfully acquired monopoly power that merely raises prices does not violate the antitrust laws.  The proposition that all forms of patent holdup are antitrust violations is dubious.  For an agency to extract concessions that go beyond the scope of the antitrust laws at all, much less through merger review of transactions that do not raise competitive concerns themselves, raises serious concerns.

Here is what the DOJ says about Google’s commitment:

If adhered to in practice, these positions could significantly reduce the possibility of a hold up or use of an injunction as a threat to inhibit or preclude innovation and competition.

Google’s commitments have been less clear.  In particular, Google has stated to the IEEE and others on Feb. 8, 2012, that its policy is to refrain from seeking injunctive relief for the infringement of SEPs against a counter-party, but apparently only for disputes involving future license revenues, and only if the counterparty:  forgoes certain defenses such as challenging the validity of the patent; pays the full disputed amount into escrow; and agrees to a reciprocal process regarding injunctions.  Google’s statement therefore does not directly provide the same assurance as the other companies’ statements concerning the exercise of its newly acquired patent rights.  Nonetheless, the division determined that the acquisition of the patents by Google did not substantially lessen competition, but how Google may exercise its patents in the future remains a significant concern.

No doubt the DOJ statement is accurate and the DOJ’s concerns about patent holdup are genuine.  But that’s not the point.

The question of the appropriate role for injunctions and damages in patent infringement litigation is a complex one.  Many scholars argue that the use of injunctions facilitates patent holdup and threatens innovation, and there are serious debates to be had about whether more vigorous antitrust enforcement of the contractual relationships between patent holders and standard setting organizations (SSOs) would spur greater innovation.  The empirical evidence that patent holdup is a pervasive problem is, however, at best quite mixed.  Further, others argue that the availability of injunctions is not only a fundamental aspect of our system of property rights but also, from an economic perspective, a force that facilitates efficient transacting by the parties.  For example, some contend that the power to obtain injunctive relief for infringement within the patent thicket results in a “cold war” of sorts, in which the threat is sufficient to induce cross-licensing by all parties.  Surely this is not first best.  But that isn’t the relevant question.

There are other more fundamental problems with the notion of patent holdup as an antitrust concern.  Kobayashi & Wright also raise concerns with the theoretical case for antitrust enforcement of patent holdup on several grounds.  One is that high probability of detection of patent holdup coupled with antitrust’s treble damages makes overdeterrence highly likely.  Another is that alternative remedies such as contract and the patent doctrine of equitable estoppel render the marginal benefits of antitrust enforcement trivial or negative in this context.  Froeb, Ganglmair & Werden raise similar points.   Suffice it to say that the debate on the appropriate scope of antitrust enforcement in patent holdup is ongoing as a general matter; there is certainly no consensus with regard to economic theory or empirical evidence that stripping the availability of injunctive relief from patent holders entering into contractual relationships with SSOs will enhance competition or improve consumer welfare.  It is quite possible that such an intervention would chill competition, participation in SSOs, and the efficient contracting process potentially facilitated by the availability of injunctive relief.

The policy debate I describe above is an important one.  Many of the questions at the center of that complex debate are not settled as a matter of economic theory, empirics, or law.  This post certainly has no ambition to resolve them here; my goal is a much more modest one.  The DOJ’s policymaking efforts through the merger review process raise serious issues.  I would hope that all would agree — regardless of where they stand on the patent holdup debate — that hammering out these complex debates in merger review, simply because the DOJ happens to have a number of cases involving patent portfolios in front of it, is foolish for several reasons.

First, it is unclear whether the DOJ could have extracted these FRAND concessions through proper merger review.  The DOJ apparently agreed that the transactions did not raise serious competitive concerns.   The pressure the DOJ imposed upon the parties to make commitments to the SSOs not to pursue injunctive relief as part of a FRAND commitment, outside of the normal consent process, raises serious concerns.  The imposition of settlement conditions far afield from the competitive consequences of the merger itself is something we see quite frequently from antitrust enforcement agencies in other countries, and this sort of behavior burns significant reputational capital when our agencies go abroad to lecture on the importance of keeping antitrust analysis consistent, predictable, and based upon the economic fundamentals of the transaction at hand.

Second, the DOJ Antitrust Division does not alone have comparative advantage in determining the optimal use of injunctions versus damages in the patent system.

Third, appearances here are quite problematic.  Given that the DOJ did not appear to have significant competitive concerns with the transactions, one can create the following narrative of events without too much creative effort: (1) the DOJ team has theoretical priors that injunctive relief is a significant competitive problem, (2) the DOJ happens to have these mergers in front of it pending review from a couple of firms likely to be repeat players in the antitrust enforcement game, (3) the DOJ asks the firms to make these concessions despite the fact that they have little to do with the conventional antitrust analysis of the transactions, under which they would have been approved without condition.

The more I think about the use of the merger review process to extract concessions from patent holders in the form of promises not to enforce property rights they would otherwise be legally entitled to enforce, the more the DOJ’s actions appear inappropriate.  The stakes are high here, both in terms of identifying patent and competition rules that will foster rather than hamper innovation, and with respect to compromising the integrity of merger review through the imposition of non-merger-related conditions of the sort we are more accustomed to seeing from the FCC, the states, or less well-developed antitrust regimes.

The Federal Trade Commission conference announcement is below; note that public comments are invited in advance of the conference.  This is an important space, and the workshop should attract some excellent speakers.  The topics suggest a greater focus on consumer protection than on competition issues.  Here is the announcement:

The Federal Trade Commission will host a workshop on April 26, 2012, to examine the use of mobile payments in the marketplace and how this emerging technology impacts consumers. This event will bring together consumer advocates, industry representatives, government regulators, technologists, and academics to examine a wide range of issues, including the technology and business models used in mobile payments, the consumer protection issues raised, and the experiences of other nations where mobile payments are more common. The workshop will be free and open to the public.

Topics may include:

  • What different technologies are used to make mobile payments and how are the technologies funded (e.g., credit card, debit card, phone bill, prepaid card, gift card, etc.)?
  • Which technologies are being used currently in the United States, and which are likely to be used in the future?
  • What are the risks of financial losses related to mobile payments as compared to other forms of payment? What recourse do consumers have if they receive fraudulent, unauthorized, and inaccurate charges? Do consumers understand these risks? Do consumers receive disclosures about these risks and any legal protections they might have?
  • When a consumer uses a mobile payment service, what information is collected, by whom, and for what purpose? Are these data collection practices disclosed to consumers? Is the data protected?
  • How have mobile payment technologies been implemented in other countries, and with what success? What, if any, consumer protection issues have they faced, and how have they dealt with them?
  • What steps should government and industry members take to protect consumers who use mobile payment services?

To aid in preparation for the workshop, FTC staff welcomes comments from the public, including original research, surveys and academic papers. Electronic comments can be submitted through the FTC’s website; paper comments should be mailed or delivered to: 600 Pennsylvania Avenue N.W., Room H-113 (Annex B), Washington, DC 20580.

The workshop is free and open to the public; it will be held at the FTC’s Satellite Building Conference Center, 601 New Jersey Avenue, N.W., Washington, D.C.