
I have a new article on the Comcast/Time Warner Cable merger in the latest edition of the CPI Antitrust Chronicle, which includes several other articles on the merger, as well.

In a recent essay, Allen Grunes & Maurice Stucke (who also have an essay in the CPI issue) pose a thought experiment: If Comcast can acquire TWC, what’s to stop it from acquiring all cable companies? The authors’ assertion is that the arguments being put forward to support the merger contain no “limiting principle,” and that the same arguments, if accepted here, would unjustifiably permit further consolidation. But there is a limiting principle: competitive harm. Size doesn’t matter, as courts and economists have repeatedly pointed out.

The article explains why the merger doesn’t give rise to any plausible theory of anticompetitive harm under modern antitrust analysis. Instead, arguments against the merger amount to little more than the usual “big-is-bad” naysaying.

In summary, I make the following points:

Horizontal Concerns

The absence of any reduction in competition should end the inquiry into any potentially anticompetitive effects in consumer markets resulting from the horizontal aspects of the transaction.

  • It’s well understood at this point that Comcast and TWC don’t compete directly for subscribers in any relevant market; in terms of concentration and horizontal effects, the transaction will neither reduce competition nor restrict consumer choice.
  • Even if Comcast were a true monopolist provider of broadband service in certain geographic markets, the DOJ would have to show that the merger would be substantially likely to lessen competition—a difficult showing to make where Comcast and TWC are neither actual nor potential competitors in any of these markets.
  • Whatever market power Comcast may currently possess, the proposed merger simply does nothing to increase it, nor to facilitate its exercise.

Comcast doesn’t currently have substantial bargaining power in its dealings with content providers, and the merger won’t change that. The claim that the combined entity will gain bargaining leverage against content providers from the merger, resulting in lower prices paid to programmers for their content, fails for similar reasons.

  • After the transaction, Comcast will serve fewer than 30 percent of total MVPD subscribers in the United States. This share is insufficient to give Comcast market power over sellers of video programming.
  • The FCC has twice tried to impose a 30 percent cable ownership cap, and twice the courts have rejected it. The D.C. Circuit concluded more than a decade ago—in far less competitive conditions than exist today—that the evidence didn’t justify a horizontal ownership limit lower than 60 percent on the basis of buyer power.
  • The recent exponential growth in OVDs like Google, Netflix, Amazon and Apple gives content providers even more ways to distribute their programming.
  • In fact, greater concentration among cable operators has coincided with an enormous increase in the output and quality of video programming.
  • Moreover, because the merger doesn’t alter the competitive make-up of any relevant consumer market, Comcast will have no greater ability to threaten to withhold carriage of content in order to extract better terms.
  • Finally, programmers with valuable content have significant bargaining power and have been able to extract high prices to prove it. None of that will change post-merger.

Vertical Concerns

The merger won’t give Comcast the ability (or the incentive) to foreclose competition from other content providers for its NBCUniversal content.

  • Because the merged firm would serve only 30 percent of the national market for MVPD services, 70 percent of the market would still be available for content distribution.
  • But even this significantly overstates the extent of possible foreclosure. OVD providers increasingly vie for the same content as cable (and satellite).
  • In the past when regulators have considered foreclosure effects for localized content (regional sports networks, primarily)—for example, in the 2005 Adelphia/Comcast/TWC deal, under far less competitive conditions—the FTC found no substantial threat of anticompetitive harm. And while the FCC did identify a potential risk of harm in its review of the Adelphia deal, its solution was to impose arbitration requirements for access to this programming—which are already part of the NBCUniversal deal conditions and which will be extended to the new territory and new programming from TWC.

The argument that the merger will increase Comcast’s incentive and ability to impair access to its users by online video competitors or other edge providers is similarly without merit.

  • Fundamentally, Comcast benefits from providing its users access to edge providers, and it would harm itself if it were to constrain access to these providers.
  • Foreclosure effects would be limited, even if they did arise. On a national level, the combined firm would have only about 40 percent of broadband customers, at most (and considerably less if wireless broadband is included in the market).
  • This leaves at least 60 percent—and quite possibly far more—of customers available to purchase content and support edge providers reaching minimum viable scale, even if Comcast were to attempt to foreclose access.

Some have also argued that because Comcast has a monopoly on access to its customers, transit providers are beholden to it, giving it the ability to degrade or simply block content from companies like Netflix. But these arguments misunderstand the market.

  • The transit market through which edge providers bring their content into the Comcast network is highly competitive. Edge providers can access Comcast’s network through multiple channels, undermining Comcast’s ability to deny access or degrade service to such providers.
  • The transit market is also almost entirely populated by big players engaged in repeat interactions and, despite a large number of transactions over the years, marked by a trivial number of disputes.
  • The recent Comcast/Netflix agreement demonstrates that the sophisticated commercial entities in this market are capable of resolving conflicts—conflicts that appear to affect only the distribution of profits among contracting parties, without raising anticompetitive concerns.
  • If Netflix does end up paying more to access Comcast’s network over time, it won’t be because of market power or this merger. Rather, it’s an indication of the evolving market and the increasing popularity of OTT providers.
  • The Comcast/Netflix deal has procompetitive justifications, as well. Charging Netflix allows Comcast to better distinguish between the high-usage Netflix customers (two percent of Netflix users account for 20 percent of all broadband traffic) and everyone else. This should lower cable bills on average, improve incentives for users, and lead to more efficient infrastructure investments by both Comcast and Netflix.
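That two-percent figure implies a striking skew in usage. A quick illustrative calculation (my arithmetic, not from the post) makes the point concrete:

```python
# Back-of-the-envelope illustration (mine, not from the post) of how skewed
# broadband usage is if 2% of users generate 20% of all traffic.
heavy_user_share = 0.02      # fraction of subscribers
heavy_traffic_share = 0.20   # fraction of total traffic they generate

# Traffic per heavy user relative to the overall average user:
vs_average = heavy_traffic_share / heavy_user_share   # 10x the average

# Traffic per heavy user relative to a typical remaining (light) user:
light_per_user = (1 - heavy_traffic_share) / (1 - heavy_user_share)
vs_light = vs_average / light_per_user                # roughly 12x a light user

print(f"A heavy user generates {vs_average:.0f}x the average user's traffic")
print(f"...and about {vs_light:.1f}x a typical light user's traffic")
```

On these numbers, a heavy Netflix user imposes roughly an order of magnitude more network cost than a typical user—which is why usage-sensitive charging can shift costs toward those who generate them.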

Critics have also alleged that the vertically integrated Comcast may withhold its own content from competing MVPDs or OVDs, or deny carriage to unaffiliated programming. In theory, by denying competitors or potential competitors access to popular programming, a vertically integrated MVPD might gain a competitive advantage over its rivals. Similarly, an MVPD that owns cable channels may refuse to carry at least some unaffiliated content to benefit its own channels. But these claims also fall flat.

  • Once again, these issues are not transaction-specific.
  • But, regardless, Comcast will not be able to engage in successful foreclosure strategies following the transaction.
  • The merger has no effect on Comcast’s share of national programming. And while it will have a larger share of national distribution post-merger, a 30 percent market share is nonetheless insufficient to confer buyer power in today’s highly competitive MVPD market.
  • Moreover, the programming market is highly dynamic and competitive, and Comcast’s affiliated programming networks face significant competition.
  • Comcast already has no ownership interest in the overwhelming majority of content it distributes. This won’t measurably change post-transaction.

Procompetitive Justifications

While the proposed transaction doesn’t give rise to plausible anticompetitive harms, it should bring well-understood procompetitive benefits. Most notably:

  • The deal will bring significant scale efficiencies in a marketplace that requires large, fixed-cost investments in network infrastructure and technology.
  • And bringing a more vertical structure to TWC will likely be beneficial, as well. Vertical integration can increase efficiency, and the elimination of double marginalization often leads to lower prices for consumers.

Let’s be clear about the baseline here. Remember all those years ago when Netflix was a mail-order DVD company? Before either Netflix or Comcast even considered using the internet to distribute Netflix’s video content, Comcast invested in the technology and infrastructure that ultimately enabled the Netflix of today. It did so at enormous cost (tens of billions of dollars over the last 20 years) and risk. Absent broadband we’d still be waiting for our Netflix DVDs to be delivered by snail mail, and Netflix would still be spending three-quarters of a billion dollars a year on shipping.

The ability to realize returns—including returns from scale—is essential to incentivizing continued network and other quality investments. The cable industry today operates with a small positive annual return on invested capital (“ROIC”), but it has had cumulative negative ROIC over the entirety of the last decade. In fact, on invested capital of $127 billion between 2000 and 2009, cable has seen economic profits of negative $62 billion and a weighted average ROIC of negative 5 percent. Meanwhile, Comcast’s stock has significantly underperformed the S&P 500 over the same period, outperforming the S&P only over the last two years.
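Those figures hang together arithmetically. Here is a quick sanity check (my own simple, non-compounded sketch, not the underlying study's methodology):

```python
# Back-of-the-envelope check of the cited cable ROIC figures
# (a simple, non-compounded sketch; not the underlying study's methodology).
invested_capital = 127e9   # cumulative invested capital, 2000-2009
economic_profit = -62e9    # cumulative economic profit over the same period
years = 10

cumulative_return = economic_profit / invested_capital   # about -48.8%
annual_average = cumulative_return / years               # about -4.9% per year

print(f"Cumulative economic return: {cumulative_return:.1%}")
print(f"Simple annual average ROIC: {annual_average:.1%}")
```

A simple annual average of about negative 4.9 percent is consistent with the cited weighted average ROIC of negative 5 percent.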

Comcast is far from being a rapacious and endlessly profitable monopolist. This merger should help it (and TWC) improve its cable and broadband services, not harm consumers.

No matter how many times Al Franken and Susan Crawford say it, neither the broadband market nor the MVPD market is imperiled by vertical or horizontal integration. The proposed merger won’t create cognizable antitrust harms. Comcast may get bigger, but that simply isn’t enough to thwart the merger.

As Geoff posted yesterday, a group of 72 distinguished economists and law professors from across the political spectrum released a letter to Chris Christie pointing out the absurdities of New Jersey’s direct distribution ban. I’m heartened that both Governor Christie and his potential rival for the 2016 Republican nomination, Texas Governor Rick Perry, have made statements, here and here, in recent days suggesting that they would support legislation to allow direct distribution. Another potential 2016 Republican contender has also joined the anti-protectionist fray. This should not be a partisan political issue. Hopefully, thinking people from both parties will realize that these laws help no one but the car dealers.

In the midst of these encouraging developments, I came across a March 5, 2014 letter from General Motors to Ohio Governor John Kasich complaining about proposed legislation that would carve out a special direct-dealing exemption for Tesla in Ohio. I’ve gotta say that I’m sympathetic to GM’s plight. It isn’t fair that Tesla would get a special exemption from regulations applicable to other car dealers. I’m not blaming Tesla, since I assume and hope that Tesla’s legislative strategy is to ask that these laws be repealed or that Tesla be exempted, not that the laws should continue to apply to other manufacturers. But the point of our letter is that no manufacturer should be subject to these restrictions. Tesla may have special reasons to prefer direct distribution, but the laws should be general—and generally permissive of direct distribution. The last thing we need is a continuation of the dealers’ crony capitalism through a system of selective exemptions from protectionist statutes.

What was most telling about GM’s letter was its straightforward admission that allowing Tesla to engage in direct distribution would give Tesla a “distinct competitive advantage” and would create a “significant disparate impact” on competition in the auto industry. That’s just another way of saying that direct distribution is more efficient. If Tesla will gain a competitive advantage by bypassing dealers, shouldn’t we want all car companies to have that same advantage?

To be clear, there are circumstances where exempting just select companies from a regulatory scheme would give them a competitive advantage not based on superior efficiency in a social-welfare-enhancing sense. For example, if general pollution control regulations are optimally set, then exempting some firms will allow them to externalize costs and thereby obtain a competitive advantage, reducing net social welfare. But that would only be the case if the regulated activity is socially harmful, which direct distribution is not, as our open letter explained. The take-away from GM’s letter should be even greater impetus to repeal the direct distribution bans across the board, so that consumers can enjoy the benefit of competition among rival manufacturers, each free to choose the most efficient means of distribution.

Earlier this month New Jersey became the most recent (but likely not the last) state to ban direct sales of automobiles. Although the rule nominally applies more broadly, it is directly aimed at keeping Tesla Motors (or at least its business model) out of New Jersey. Automobile dealers have offered several arguments why the rule is in the public interest, but a little basic economics reveals that these arguments are meritless.

Today the International Center for Law & Economics sent an open letter to New Jersey Governor Chris Christie, urging reconsideration of the regulation and explaining why the rule is unjustified — except as rent-seeking protectionism by independent auto dealers.

The letter, which was principally written by University of Michigan law professor Dan Crane and based in large part on his blog posts here at Truth on the Market (see here and here), was signed by more than 70 economists and law professors.

As the letter notes:

The Motor Vehicle Commission’s regulation was aimed specifically at stopping one company, Tesla Motors, from directly distributing its electric cars. But the regulation would apply equally to any other innovative manufacturer trying to bring a new automobile to market, as well. There is no justification on any rational economic or public policy grounds for such a restraint of commerce. Rather, the upshot of the regulation is to reduce competition in New Jersey’s automobile market for the benefit of its auto dealers and to the detriment of its consumers. It is protectionism for auto dealers, pure and simple.

The letter explains at length the economics of retail distribution and the misguided, anti-consumer logic of the regulation.

The letter concludes:

In sum, we have not heard a single argument for a direct distribution ban that makes any sense. To the contrary, these arguments simply bolster our belief that the regulations in question are motivated by economic protectionism that favors dealers at the expense of consumers and innovative technologies. It is discouraging to see this ban being used to block a company that is bringing dynamic and environmentally friendly products to market. We strongly encourage you to repeal it, by new legislation if necessary.

Among the letter’s signatories are some of the country’s most prominent legal scholars and economists from across the political spectrum.

Read the letter here:

Open Letter to New Jersey Governor Chris Christie on the Direct Automobile Distribution Ban

Last summer I blogged here at TOTM about the protectionist statutes designed to preempt direct distribution of Tesla cars that are proliferating around the country. This week, New Jersey’s Motor Vehicle Commission voted to add New Jersey to the list of states bowing to the politically powerful car dealers’ lobby.

Yesterday, I was on Bloomberg’s Market Makers show with Jim Appleton, the president of the New Jersey Coalition of Automotive Retailers. (The clip is here). Mr. Appleton advanced several “very interesting” arguments against direct distribution of cars, including that we already regulate everything else from securities sales to dogs and cats, so why not regulate car sales as well. The more we regulate, the more we should regulate. Good point. I’m stumped. But moving on, Mr. Appleton also argued that this particular regulation is necessary for actual reasons, and he gave two.

First, he argued that Tesla has a monopoly and that the direct distribution prohibition would create price competition. But, of course, Tesla does not have anything like a monopoly. A point that Mr. Appleton repeated three times over the course of our five minutes yesterday was that Tesla’s market share in New Jersey is 0.1%. Sorry, not a monopoly.

Mr. Appleton then insisted that the relevant “monopoly” is over the Tesla brand. This argument misunderstands basic economics. Every seller has a “monopoly” in its own brand to the same extent as Mr. Appleton has a “monopoly” in the tie he wore yesterday. No one but Tesla controls the Tesla brand, and no one but Mr. Appleton controls his tie. But, as economists have understood for a very long time, it would be absurd to equate monopoly power in an economic sense with the exclusive legal right to control something. Otherwise, every man, woman, child, dog, and cat is a monopolist over a whole bunch of things. The word monopoly can only make sense as capturing the absence of rivalry between sellers of different brands. A seller can have monopoly power in its brand, but only if there are not other brands that are reasonable substitutes. And, of course, there are many reasonable substitutes for Teslas.

Nor will forcing Tesla to sell through dealers create “price competition” for Teslas to the benefit of consumers. As I explained in my post last summer, Tesla maximizes its profits by minimizing its cost of distribution. If dealers can perform that function more efficiently than Tesla, Tesla has every incentive to distribute through dealers. The one thing Tesla cannot do is increase its profits by charging more for the retail distribution function than dealers would charge. Whatever the explanation for Tesla’s decision to distribute directly may be, it has nothing to do with charging consumers a monopoly price for the distribution of Teslas.

Mr. Appleton’s second argument was that the dealer protection laws are necessary for consumer safety. He then pointed to the news that GM might have prevented accidents that took 12 lives had it recalled some of its vehicles earlier than it eventually did. But of course all of this occurred while GM was distributing through franchised dealers. By Mr. Appleton’s logic, I should have been arguing that distribution through franchised dealers kills people.

Mr. Appleton then offered a concrete argument on car safety. He said that, to manufacturers, product recalls are a cost whereas, to dealers, they are an opportunity to earn income. But that argument is also facially absurd. Dealers don’t make the decision to issue safety recalls. Those decisions come from the manufacturer and the National Highway Traffic Safety Administration. Dealers benefit only incidentally.

The direct distribution laws have nothing to do with enhancing price competition or car safety. They are protectionism for dealers, pure and simple. At a time when Chris Christie is trying to regain credibility with New Jersey voters in general, and New Jersey motorists in particular, this development is a real shame.

Today the D.C. Circuit struck down most of the FCC’s 2010 Open Internet Order, rejecting rules that required broadband providers to carry all traffic for edge providers (“anti-blocking”) and prevented providers from negotiating deals for prioritized carriage. However, the appeals court did conclude that the FCC has statutory authority to issue “Net Neutrality” rules under Section 706(a) and let stand the FCC’s requirement that broadband providers clearly disclose their network management practices.

The following statement may be attributed to Geoffrey Manne and Berin Szoka:

The FCC may have lost today’s battle, but it just won the war over regulating the Internet. By recognizing Section 706 as an independent grant of statutory authority, the court has given the FCC near limitless power to regulate not just broadband, but the Internet itself, as Judge Silberman recognized in his dissent.

The court left the door open for the FCC to write new Net Neutrality rules, provided the Commission doesn’t treat broadband providers as common carriers. This means that, even without reclassifying broadband as a Title II service, the FCC could require that any deals between broadband and content providers be reasonable and non-discriminatory, just as it has required wireless carriers to provide data roaming services to their competitors’ customers on that basis. In principle, this might be a sound approach, if the rule resembles antitrust standards. But even that limitation could easily be evaded if the FCC regulates through case-by-case enforcement actions, as it tried to do before issuing the Open Internet Order. Either way, the FCC need only make a colorable argument under Section 706 that its actions are designed to “encourage the deployment… of advanced telecommunications services.” If the FCC’s tenuous “triple cushion shot” argument could satisfy that test, there is little limit to the deference the FCC will receive.

But that’s just for Net Neutrality. Section 706 covers “advanced telecommunications,” which seems to include any information service, from broadband to the interconnectivity of smart appliances like washing machines and home thermostats. If the court’s ruling on Section 706 is really as broad as it sounds, and as the dissent fears, the FCC just acquired wide authority over these, as well — in short, the entire Internet, including the “Internet of Things.” While the court’s “no common carrier rules” limitation is a real one, the FCC clearly just gained enormous power that it didn’t have before today’s ruling.

Today’s decision essentially rewrites the Communications Act in a way that will, ironically, do the opposite of what the FCC claims: hurt, not help, deployment of new Internet services. Whatever the FCC’s role ought to be, such decisions should be up to our elected representatives, not three unelected FCC Commissioners. So if there’s a silver lining in any of this, it may be that the true implications of today’s decision are so radical that Congress finally writes a new Communications Act — a long-overdue process Congressmen Fred Upton and Greg Walden have recently begun.

Szoka and Manne are available for comment at media@techfreedom.org. Find/share this release on Facebook or Twitter.

For those in the DC area interested in telecom regulation, there is another great event opportunity coming up next week.

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.

The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.

So which was it? More importantly, what can we learn from the seventy-year period before the 1984 break-up of AT&T, and from the last three decades of efforts to unleash competition? With fewer than a third of Americans relying on traditional telephony and Internet-based competitors increasingly driving competition, what does universal service mean in the digital era? As Congress contemplates overhauling the Communications Act, how can policymakers promote universal service through competition, innovation, and investment? What should a new Kingsbury Commitment look like?

Following a luncheon keynote address by FCC Commissioner Ajit Pai, a diverse panel of experts moderated by TechFreedom President Berin Szoka will explore these issues and more. The panel includes:

  • Harold Feld, Public Knowledge
  • Rob Atkinson, Information Technology & Innovation Foundation
  • Hance Haney, Discovery Institute
  • Jeff Eisenach, American Enterprise Institute
  • Fred Campbell, Former FCC Commissioner

Space is limited so RSVP now if you plan to attend in person. A live stream of the event will be available on this page. You can follow the conversation on Twitter on the #Kingsbury100 hashtag.

When:
Thursday, December 19, 2013
11:30 – 12:00 Registration & lunch
12:00 – 1:45 Event & live stream

The live stream will begin on this page at noon Eastern.

Where:
The Methodist Building
100 Maryland Ave NE
Washington D.C. 20002

Questions?
Email contact@techfreedom.org.

Over at the Center for the Protection of Intellectual Property (CPIP), Mark Schultz has an important blog posting on the Mercatus Center’s recent launch of its new copyright piracy website, piracydata.org.  The launch of this website has caused a bit of a tempest in a teapot, with a positive report in the Washington Post and a report in the Columbia Journalism Review pointing out problems in its data and errors in its claims.  (It is a bit ironic that a libertarian organization is having trouble with the launch of a website at the same time that there is similar reporting on the troubled launch of another website on the opposite side of the political spectrum, Obamacare.)

Professor Schultz, who is a Senior Scholar at CPIP and a law professor at Southern Illinois University, makes many important points in his blog posting (too many to recount here).  One of his more important observations is that the piracydata.org website reflects an unfortunate tendency among libertarian IP skeptics, who seem to fall victim to an error that they often identify in leftist critiques of the free market, at least on non-IP issues.  That is, some libertarian IP skeptics seem all too quick to deduce conclusions about actual, real-world business models solely from theoretical knowledge about what they think these business models should be in some “ideal” world.

Professor Schultz also identifies that, despite protestations to the contrary, Jerry Brito has explicitly framed his website as a “blame the victim” defense of copyright piracy — stating explicitly on Twitter that “Hollywood should blame itself for its piracy problems.” Consistent with such statements, of course, conventional wisdom has quickly gelled around the piracydata.org website that it is in fact a condemnation of the creative industries’ business models.  (Professor Schultz backs up this point with many references and links, including a screen grab of Jerry’s tweet.)

Professor Schultz ultimately concludes his important essay as follows:

perhaps the authors should simply dispense with the pretext. All too often, we see arguments such as this that say ‘I think copyright is important and abhor piracy, BUT . . . ‘ And, after the “but” comes outrage at most any attempt by creators to enforce their rights and protect their investment. Or, as in this case, advice that excuses piracy and counsels surrender to piracy as the only practical way forward. Perhaps it would be less hypocritical for such commentators to admit that they are members of the Copyleft. While I think that it’s a terribly misguided and unfortunate position, it is all too respectable in libertarian circles these days. See the debate in which I participated earlier this year in Cato Unbound.

In any event, however, how about a little more modesty and a little more respect for copyright owners? In truth, the “content” industry leaders I’ve met are, as I’ve told them, way smarter than the Internet says they are. They are certainly smarter about their business than any policy analysts or other Washingtonians I’ve met.

The movie industry knows these numbers very well and knows about the challenges imposed by its release windows. They know their business better than their critics. All sorts of internal, business, and practical constraints may keep them from fixing their problems overnight, but it’s not a lack of will or insight that’s doing it. If you love the free market, then perhaps it’s time to respect the people with the best information about their property and the greatest motivation to engage in mutually beneficial voluntary exchanges.

Or you can just contribute to the mountain of lame excuses for piracy that have piled up over the last decade.

This is a compelling call to arms for some libertarians doing policy work in the creative industries to take more seriously in practice their theoretical commitments to private ordering and free enterprise.

As the blogging king (Instapundit) is wont to say: Read the whole thing.

The debates over mobile spectrum aggregation and the auction rules for the FCC’s upcoming incentive auction — like all regulatory rent-seeking — can be farcical. One aspect of the debate in particular is worth highlighting, as it puts into stark relief the tendentiousness of self-interested companies making claims about the public interestedness of their preferred policies: The debate over how and whether to limit the buying and aggregating of lower frequency (in this case 600 MHz) spectrum.

A little technical background is in order. At its most basic, a signal carried in higher frequency spectrum doesn’t travel as well as a signal carried in lower frequency spectrum. The higher the frequency, the closer together cell towers need to be to maintain a good signal.

600 MHz is a relatively low frequency for wireless communications. In rural areas it helps reduce infrastructure costs for wide-area coverage because cell towers can be placed farther apart, and thus fewer towers must be built. But in cities, population density trumps frequency, and propagation range is essentially irrelevant to infrastructure costs. In other words, it doesn’t matter how far your signal will travel if congestion alleviation demands you build cell towers closer together than even the highest-frequency spectrum requires anyway. The optimal—nay, the largest usable—cell radius in urban and suburban areas is considerably smaller than the cell radius that low-frequency spectrum allows for.
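The physics behind the rural cost advantage can be sketched with the standard free-space path-loss model (my illustration; the post doesn't invoke any particular model, and real-world network planning uses far richer propagation models that account for terrain, clutter, and building penetration):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard textbook approximation)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# For a fixed link budget, range scales inversely with frequency:
# a 600 MHz cell reaching 10 km has the same path loss as a
# 1900 MHz cell reaching 10 * (600/1900) ~ 3.2 km.
budget_db = fspl_db(10.0, 600)
range_1900_km = 10.0 * (600 / 1900)

# Coverage area grows with the square of the range, so a coverage-limited
# (rural) build-out needs roughly (1900/600)^2 ~ 10x more towers at 1900 MHz.
tower_ratio = (10.0 / range_1900_km) ** 2

print(f"Equal-loss range at 1900 MHz: {range_1900_km:.1f} km")
print(f"Approx. tower multiple for rural coverage: {tower_ratio:.0f}x")
```

In an urban deployment, by contrast, capacity forces cell radii well below even the 1900 MHz limit, so this range advantage buys nothing—which is exactly the point of the paragraph above.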

It is important to note, of course, that signal distance isn’t the only propagation characteristic imparting value to lower frequency spectrum; in particular, it is also valuable even in densely populated settings for its ability to travel through building walls. That said, however, the primary arguments made in favor of spreading the 600 MHz wealth — of effectively subsidizing its purchase by smaller carriers — are rooted in its value in offering more efficient coverage in less-populated areas. Thus the FCC has noted that while there may be significant infrastructure cost savings associated with deploying lower frequency networks in rural areas, this lower frequency spectrum provides little cost advantage in urban or suburban areas (even though, as noted, it has building-penetrating value there).

It is primarily because of these possible rural network cost advantages that certain entities (the Department of Justice, Free Press, the Competitive Carriers Association, e.g.) have proposed that AT&T and Verizon (both of whom have significant lower frequency spectrum holdings) should be restricted from winning “too much” spectrum in the FCC’s upcoming 600 MHz incentive auctions. The argument goes that, in order to ensure national competition — that is, to give other companies financial incentive to build out their networks into rural areas — the auction should be structured to favor Sprint and T-Mobile (both of whose spectrum holdings are mostly in the upper frequency bands) as awardees of this low-frequency spectrum, at commensurately lower cost.

Shockingly, T-Mobile and Sprint are on board with this plan.

So, to recap: 600 MHz spectrum confers cost savings when used in rural areas. It has much less effect on infrastructure costs in urban and suburban areas. T-Mobile and Sprint don’t have much of it; AT&T and Verizon have lots. If we want T-Mobile and Sprint to create the competing national networks that the government seems dead set on engineering, we need to put a thumb on the scale in the 600 MHz auctions. So they can compete in rural areas. Because that’s where 600 MHz spectrum offers cost advantages. In rural areas.

So what does T-Mobile plan to do if it wins the spectrum lottery? Certainly not build in rural areas. As Craig Moffett notes, currently “T-Mobile’s U.S. network is fast…but coverage is not its strong suit, particularly outside of metro areas.” And for the future? T-Mobile’s breakneck LTE coverage ramp-up since the failed merger with AT&T is expected to top out at 225 million people, or the 71% of consumers living in the most-populated areas (it’s currently somewhere over 200 million). “Although sticking to a smaller network, T-Mobile plans to keep increasing the depth of its LTE coverage” (emphasis added). Depth. That means more bandwidth in high-density areas. It does not mean broader coverage. Obviously.

Sprint, meanwhile, is devoting all of its resources to playing LTE catch-up in the most-populated areas; it isn’t going to waste valuable spectrum resources on expanded rural build out anytime soon.

The kicker is that T-Mobile relies on AT&T’s network to provide its urban and suburban customers with 3G coverage when they do roam into rural areas, taking advantage of a merger break-up provision that gives it roaming access to AT&T’s 3G network. In other words, T-Mobile’s national network is truly “national” only insofar as it piggybacks on AT&T’s broader coverage. And because AT&T will get the blame for congestion when T-Mobile’s customers roam onto its network, the cost to T-Mobile of hamstringing AT&T’s network is low.

The upshot is that T-Mobile seems not to need, nor does it intend to deploy, lower frequency spectrum to build out its network in less-populated areas. Defenders say that rigging the auction rules to benefit T-Mobile and Sprint will allow them to build out in rural areas to compete with AT&T’s and Verizon’s broader networks. But this is a red herring. They may get the spectrum, but they won’t use it to extend their coverage in rural areas; they’ll use it to add “depth” to their overloaded urban and suburban networks.

But for AT&T, the need for additional spectrum is made all the more acute by the roaming deal, which requires it to serve its own customers and those of T-Mobile.

This makes clear the reason underlying T‑Mobile’s advocacy for rigging the 600 MHz auction – it is simply so that T‑Mobile can acquire this spectrum on the cheap to use in urban and suburban areas, not so that it can deploy a wide rural network. And the beauty of it is that by hamstringing AT&T’s ability to acquire this spectrum, it becomes more expensive for AT&T to serve T‑Mobile’s own customers!

Two birds, one stone: lower your costs, raise your competitor’s costs.

The lesson is this: if we want 600 MHz spectrum to be used efficiently, including for rural LTE service, we should let an unrestricted auction allocate it and trust that the highest bidder will make the most valuable use of the spectrum. The experience of the relatively unrestricted 700 MHz auction in 2008 confirms this: the purchase of 700 MHz spectrum by AT&T and Verizon led to the US becoming the world leader in LTE. Why mess with success?

[Cross-posted at RedState]

I have a new post up at TechPolicyDaily.com, excerpted below, in which I discuss the growing body of (surprisingly uncontroversial) work showing that broadband in the US compares favorably to that in the rest of the world. My conclusion, which is frankly more cynical than I like, is that concern about the US “falling behind” is a manufactured debate. It’s a compelling story that the media likes and that plays well for (some) academics.

Before the excerpt, I’d also like to quote one of today’s headlines from Slashdot:

“Google launched the citywide Wi-Fi network with much fanfare in 2006 as a way for Mountain View residents and businesses to connect to the Internet at no cost. It covers most of the Silicon Valley city and worked well until last year, as Slashdot readers may recall, when connectivity got rapidly worse. As a result, Mountain View is installing new Wi-Fi hotspots in parts of the city to supplement the poorly performing network operated by Google. Both the city and Google have blamed the problems on *the design of the network*. Google, which is involved in several projects to provide Internet access in various parts of the world, said in a statement that it is ‘actively in discussions with the Mountain View city staff to review several options for the future of the network.’”

The added emphasis is mine. It is added to draw attention to the simple point that designing and building networks is hard. Like, really really hard. Folks think that it’s easy, because they have small networks in their homes or offices — so surely they can scale to a nationwide network without much trouble. But all sorts of crazy stuff starts to happen when we substantially increase the scale of IP networks. This is just one of the very many things that should give us pause about calls for the buildout of a government-run or government-sponsored Internet infrastructure.

Another of those things is whether there’s any need for that. Which brings us to my TechPolicyDaily.com post:

In the week or so since TPRC, I’ve found myself dwelling on an observation I made during the conference: how much agreement there was, especially on issues usually thought of as controversial. I want to take a few paragraphs to consider what was probably the most surprisingly non-controversial panel of the conference, the final Internet Policy panel, in which two papers – one by ITIF’s Rob Atkinson and the other by James McConnaughey from NTIA – were presented showing that broadband Internet service in the US (and Canada, though I will focus on the US) compares quite well to that offered in the rest of the world. [...]

But the real question that this panel raised for me was: given how well the US actually compares to other countries, why does concern about the US falling behind dominate so much discourse in this area? When you get technical, economic, legal, and policy experts together in a room – which is what TPRC does – the near consensus seems to be that the “kids are all right”; but when you read the press, or much of the high-profile academic literature, “the sky is falling.”

The gap between these assessments could not be larger. I think that we need to think about why this is. I hate to be cynical or disparaging – especially since I know strong advocates on both sides and believe that their concerns are sincere and efforts earnest. But after this year’s conference, I’m having trouble shaking the feeling that ongoing concern about how US broadband stacks up to the rest of the world is a manufactured debate. It’s a compelling, media- and public-friendly, narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. [...]

Compare this to the Chicken Little narrative. As I was writing this, I received a message from a friend asking my views on an Economist blog post that shares data from the ITU’s just-released Measuring the Information Society 2013 report. This data shows that the US has some of the highest prices for pre-paid handset-based mobile data around the world. That is, it reports the standard narrative – and it does so without looking at the report’s methodology. [...]

Even more problematic than what the Economist blog reports, however, is what it doesn’t report. [The report contains data showing the US has some of the lowest cost fixed broadband and mobile broadband prices in the world. See the full post at TechPolicyDaily.com for the numbers.]

Now, there are possible methodological problems with these rankings, too. My point here isn’t to debate the relative position of the United States. It’s to ask why the “story” about this report cherry-picks the alarming data, fails to consider its methodology, and ignores the data that contradicts its narrative.

Of course, I answered that question above: It’s a compelling, media- and public-friendly, narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. Manufacturing debate sells copy and ads, and advances careers.

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information-gathering programs are worrying. But we can’t and shouldn’t pretend that there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.