
President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s fair to assume that a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—will be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services.

History has, by now, definitively demonstrated that the FCC’s January 2018 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it. 

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.

The Perennial Threat of Price Controls

Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation. 

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the Commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)). 

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, the FCC having the ability even to threaten rate regulation could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential use as a tool to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which claimed authority to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication.

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good while simultaneously creating downward pressure on its supply.
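The mechanics are straightforward enough to sketch in a few lines. The toy model below uses invented linear supply and demand curves (none of the numbers are estimates of any real market) to show how a binding price ceiling simultaneously raises quantity demanded and lowers quantity supplied, leaving unmet demand:

```python
# Toy linear supply/demand model illustrating why a binding price cap
# creates a shortage. All parameters are illustrative assumptions, not
# estimates of the actual broadband market.

def quantity_demanded(price, intercept=100.0, slope=2.0):
    """Linear demand: quantity falls as price rises."""
    return max(intercept - slope * price, 0.0)

def quantity_supplied(price, intercept=10.0, slope=3.0):
    """Linear supply: quantity rises as price rises."""
    return max(intercept + slope * price, 0.0)

def market_outcome(price_cap=None):
    # Unregulated equilibrium: solve 100 - 2p = 10 + 3p  ->  p* = 18
    eq_price = (100.0 - 10.0) / (2.0 + 3.0)
    price = eq_price if price_cap is None else min(eq_price, price_cap)
    demanded = quantity_demanded(price)
    supplied = quantity_supplied(price)
    # Trades are limited by the short side of the market
    # (the supply side, when the cap binds)
    traded = min(demanded, supplied)
    shortage = max(demanded - supplied, 0.0)
    return price, traded, shortage

print(market_outcome())              # no cap: the market clears
print(market_outcome(price_cap=10))  # binding cap: demand up, supply down
```

At the capped price, fewer units change hands than at the unregulated equilibrium, even though more consumers want the good; that gap is the textbook shortage.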

Broadband rate regulation, whether implicit or explicit, will have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers make fewer and less intensive investments to maintain their networks, but they would also invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (pp. 9-10)
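The logic behind Spence’s observation can be compressed into a short derivation. This is a generic sketch of the standard argument, not the exact model in the paper: let $x(p,s)$ denote demand at price $p$ and quality $s$, $P(q,s)$ the inverse demand curve, and $C(x,s)$ the cost function.

```latex
% With price fixed at \bar{p}, the firm chooses quality s to maximize profit:
\[
\pi(s) \;=\; \bar{p}\, x(\bar{p}, s) \;-\; C\big(x(\bar{p}, s),\, s\big),
\qquad \text{FOC:}\quad (\bar{p} - C_x)\, x_s \;=\; C_s .
\]
% A planner instead maximizes total surplus at the same fixed price:
\[
W(s) \;=\; \int_0^{x(\bar{p},s)} P(q,s)\, dq \;-\; C\big(x(\bar{p},s),\, s\big),
\qquad
\frac{dW}{ds} \;=\; \underbrace{\int_0^{x} P_s(q,s)\, dq}_{\substack{\text{inframarginal consumers'}\\ \text{valuation of quality}}}
\;+\; (\bar{p} - C_x)\, x_s \;-\; C_s .
\]
```

At the firm’s chosen quality, the last two terms net to zero (that is the firm’s first-order condition), leaving $dW/ds > 0$ whenever $P_s > 0$: the price-capped firm stops short of the surplus-maximizing quality precisely because it cannot capture the inframarginal consumers’ valuation of the improvement.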

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p. 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns are dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market. If the price on one side of the market is set at zero through rate regulation, then there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else, it will require another layer of imprecise and complex regulation and even deeper constraints on investment). 
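That upward pricing pressure can be illustrated with a toy two-sided-platform model. Everything here is an invented linear specification (the intercepts and cross-side network effects are arbitrary assumptions, not calibrated to broadband data): participation on each side depends on its own price and on the size of the other side, and we compare the platform’s profit-maximizing consumer price with and without a zero-price mandate on the edge-provider side.

```python
# Toy two-sided platform: forcing the price on one side to zero raises the
# profit-maximizing price on the other side. All parameters are invented.

A_C, A_E = 10.0, 10.0   # standalone participation intercepts (assumed)
G, H = 0.5, 0.2         # cross-side network effects (assumed)

def participation(p_c, p_e):
    """Closed-form solution of the participation fixed point:
    N_c = A_C + G*N_e - p_c,  N_e = A_E + H*N_c - p_e
    (clamped at zero as a guard for extreme prices)."""
    denom = 1.0 - G * H
    n_c = (A_C - p_c + G * (A_E - p_e)) / denom
    n_e = (A_E - p_e + H * (A_C - p_c)) / denom
    return max(n_c, 0.0), max(n_e, 0.0)

def profit(p_c, p_e):
    n_c, n_e = participation(p_c, p_e)
    return p_c * n_c + p_e * n_e

def best_prices(edge_price_capped=False):
    """Grid-search the platform's profit-maximizing price pair."""
    grid = [i / 10.0 for i in range(0, 151)]  # prices 0.0 .. 15.0
    edge_grid = [0.0] if edge_price_capped else grid
    return max(((p_c, p_e) for p_c in grid for p_e in edge_grid),
               key=lambda pp: profit(*pp))

free_pc, free_pe = best_prices()
capped_pc, _ = best_prices(edge_price_capped=True)
print(f"unconstrained prices: consumer={free_pc}, edge={free_pe}")
print(f"with edge price forced to zero: consumer={capped_pc}")
```

In this specification the platform would voluntarily charge the edge side a positive price; mandating a zero price there pushes the profit-maximizing consumer price strictly higher, which is the one-sentence version of the argument in the text.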

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements as manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities always scrutinize such practices under a rule of reason, requiring a showing of substantial exclusionary effect and lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle and with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

[Cross-posted at Tech Liberation Front]

Milton Mueller responded to my post Wednesday on the DOJ’s decision to halt the AT&T/T-Mobile merger by asserting that there was no evidence the merger would lead to “anything innovative and progressive” and claiming “[t]he spectrum argument fell apart months ago, as factual inquiries revealed that AT&T had more spectrum than Verizon and the mistakenly posted lawyer’s letter revealed that it would be much less expensive to expand its capacity than to acquire T-Mobile.”  With respect to Milton, I think he’s been suckered by the “big is bad” crowd at Public Knowledge and Free Press.  But he’s hardly alone, and these claims — claims that may well have undergirded the DOJ’s decision to step in to some extent — merit thorough refutation.

To begin with, LTE is “progress” and “innovation” over 3G and other quasi-4G technologies.  AT&T is attempting to make an enormous (and risky) investment in deploying LTE technology reliably and to almost everyone in the US–something T-Mobile certainly couldn’t do on its own and something AT&T would have been able to do only partially and over a longer time horizon and, presumably, at greater expense.  Such investments are exactly the things that spur innovation across the ecosystem in the first place.  No doubt AT&T’s success here would help drive the next big thing–just as quashing it will make the next big thing merely the next medium-sized thing.

The “Spectrum Argument”

The spectrum argument that Milton claims “fell apart months ago” is the real story here, the real driver of this merger, and the reason why the DOJ’s action yesterday is, indeed, a blow to progress.  That argument, unfortunately, still stands firm.  Even more, the irony is that to a significant extent the spectrum shortfall is a product of the government’s own making–through mismanagement of spectrum by the FCC, political dithering by Congress, and local government intransigence on tower siting and co-location–and the notion of the government now intervening here to “fix” one of the most significant private efforts to make progress despite these government impediments is really troubling.

Anyway, here’s what we know about spectrum:  There isn’t enough of it in large enough blocks and in bands suitable for broadband deployment using available technology to fully satisfy current–let alone future–demand.

Two incredibly detailed government sources for this conclusion are the FCC’s 15th Annual Wireless Competition Report and the National Broadband Plan.  Here’s FCC Chairman Julius Genachowski summarizing the current state of affairs (pdf):

The point deserves emphasis:  the clock is ticking on our mobile future. The FCC is an expert agency staffed with first-rate employees who have been working on spectrum allocation for decades – and let me tell you what the career engineers are telling me. Demand for spectrum is rapidly outstripping supply. The networks we have today won’t be able to handle consumer and business needs.

* * *

To avoid this crisis, the National Broadband Plan recommended reallocating 500 megahertz of spectrum for broadband, nearly double the amount that is currently available.

* * *

First, there are some who say that the spectrum crunch is greatly exaggerated – indeed, that there is no crunch coming. They also suggest that there are large blocks of spectrum just lying around – and that some licensees, such as cable and wireless companies, are just sitting on top of, or “hoarding,” unused spectrum that could readily solve that problem. That’s just not true.

* * *

The looming spectrum shortage is real – and it is the alleged hoarding that is illusory.

It is not hoarding if a company paid millions or billions of dollars for spectrum at auction and is complying with the FCC’s build-out rules. There is no evidence of non-compliance. . . . [T]he spectrum crunch will not be solved by the build-out of already allocated spectrum.

All of the evidence suggests that spectrum suitable for mobile broadband is scarce and growing scarcer.  Full stop.

It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans (and its investors and stockholders).  You’d think AT&T would be delighted to hear this news–what we really need is a shareholder resolution to put Gigi Sohn on the board!

But seriously, put yourself in AT&T’s shoes for a moment.  Its long-term plans require the company to deploy significantly more spectrum than it currently holds in a reasonable time horizon (even granting Milton’s dubious premise that the company is squatting on scads of unused spectrum–remember that even if AT&T had all the spectrum sitting in its proverbial bank vault it would still be just about a third of the total amount of spectrum we’re predicted to need in just a few years).  Considering the various impediments of net neutrality regulation, congressional politics, presidential politics (think this had anything to do with claims about job losses from the merger, by chance?), reluctant broadcasters, the FCC, state PUCs, environmental groups and probably 10-12 others . . . the chances of being able to obtain the necessary spectrum and cell tower sitings in any other reasonable fashion were perhaps appropriately deemed . . . slim.

With the T-Mobile deal, on the other hand, “AT&T will gain cell sites equivalent to what would have taken on average five years to build without the transaction, and double that in some markets. AT&T’s network density will increase by approximately 30 percent in some of its most populated areas.” (Source).  I just don’t see how this jibes with the claim that the spectrum argument has fallen apart.

But there is a larger, “meta” point to make here, and it’s one that policy scolds and government regulators too often forget.  Even if none of that were true, we don’t know for sure what is optimal, and we do know that the DOJ is a political organization made up of human beings, operating under that same ignorance, with incentives that don’t necessarily translate into “maximize social welfare,” and with no actual “skin in the game.”  Under those conditions, the basic, simple, time-tested, logical and self-evident error-cost principle counsels pretty firmly against intervention.  Humility, not hubris, should rule the roost.

And that’s especially true since you know what will happen if the DOJ (or the FCC) succeeds in preventing AT&T from buying T-Mobile?  T-Mobile will still disappear and we’ll still be left with (according to the DOJ’s analysis) the terrifying prospect of only 3 national wireless telecom providers.  Only, in that case, everyone’s going to think a lot harder about investing in future developments that might warrant integration or cooperation or . . . well, the DOJ will challenge anything, so add to the list patent pools, too much success, not enough sharing, etc., etc.  And you wonder why I think this might constitute an assault on innovation?

Now, as for Milton’s specific claims, reminiscent of Public Knowledge’s and Free Press’ talking points, let me quote AT&T’s Public Interest Statement discussing its own particular spectrum holdings:

Because of the high demand for broadband service, AT&T already has had to deploy four carriers (for a total of 40 MHz of spectrum) for UMTS [3G] in some areas—and it will need to deploy more in the near future, even if doing so squeezes its GSM spectrum allocation and compromises GSM service quality . . . .  AT&T expects that, given the relative infancy of the LTE ecosystem and the time needed to migrate subscribers, it will need to continue to allocate spectrum to UMTS services for a substantial number of years—indeed, even longer than AT&T needs to continue allocating spectrum for GSM services.

* * *

AT&T has begun deployment of LTE services using its AWS and 700 MHz spectrum and currently plans to cover more than 250 million people by the end of 2013

* * *

AT&T projects it will need to use its 850 MHz and 1900 MHz spectrum holdings to support GSM and UMTS services for a number of years and, in the meantime, will not be able to re-deploy them for more spectrally efficient LTE services.

* * *

AT&T’s existing WCS spectrum holdings cannot be used for this purpose either, because the technical rules for the WCS band, such as limits on power spectral density, make it infeasible to use that band for broadband service.

In other words, I don’t think AT&T has been (nor could it be, given the FCC’s detailed knowledge on the subject) hiding its spectrum holdings.  Instead, the company has been making quite clear that the spectrum it has is simply insufficient to meet anticipated demand.  And, well, duh!  Anyone who uses AT&T knows its network is overloaded.  Some of that’s because of tower-siting issues, some because it simply didn’t anticipate the extent of demand it would face.  I heard somewhere that, no matter how hard they try to correct for their perpetual underestimation, every estimate by every mobile provider of anticipated spectrum needs over the past two decades or so has fallen short.  I’m quite sure that AT&T didn’t anticipate in 2007 that spectrum usage would increase by 8000% (yes, that’s thousand) by 2010.

Moreover, there will always (in any sensible system) be excess capacity at times–as it happens, at (conveniently) the times when spectrum usage is often counted–in order to deal with peak loads.  It is no more sensible to deploy capacity sufficient to handle the maximum load 100% of the time than it is to deploy capacity to handle only the minimum load 100% of the time.  Does that mean the often-unused spectrum is “excess”?  Clearly not.
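The point reduces to trivial arithmetic. With an invented 24-hour load profile (the figures below are made up purely for illustration), capacity sized to the busiest hour necessarily looks “underused” most of the day, even though none of it is excess:

```python
# A network provisioned for peak demand will show low average utilization.
# The hourly load figures are invented for illustration only.
hourly_load = [20, 15, 10, 10, 15, 30, 50, 70, 80, 75, 70, 65,
               60, 60, 65, 70, 85, 100, 95, 90, 80, 60, 40, 25]

capacity = max(hourly_load)  # provision for the busiest hour of the day
avg_utilization = sum(hourly_load) / (len(hourly_load) * capacity)

print(f"capacity: {capacity}")
print(f"average utilization: {avg_utilization:.0%}")
```

Here the network runs at barely over half its capacity on average, yet trimming capacity to the average would mean failing every user during the evening peak; counting the off-peak headroom as “unused spectrum” makes the same mistake.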

Moreover (again), not all spectrum is in contiguous blocks sufficient to deploy LTE.  AT&T (at least) claims that is the case with much of its existing spectrum.  Spectrum isn’t simply fungible, and un-nuanced claims that “AT&T has X megahertz of spectrum and it is plenty” are just meaningless.  Again, just because Free Press says otherwise does not make it so.  You can simply discount AT&T’s claims if you like–I’m sure it’s possible they’re just lying; but you should probably be careful whose “information” you believe instead.

But, no, Milton, the spectrum argument did not “fall apart months ago.”  Gigi Sohn, Harold Feld and Sprint just said it did.  There’s a difference.

“Letter-Gate”

As for the infamous letter alleged to show that AT&T could expand LTE service from its previously-planned 80% of the country to the 97% it promises if the merger goes through for significantly less than it would cost to buy T-Mobile:  I don’t know exactly what its import is—but no one outside AT&T and, maybe, the FCC really does, either.  But I think a little sensible skepticism is in order.

First, for those who haven’t read it, the letter says, in relevant part:

The purpose of the meeting was to discuss AT&T’s current LTE deployment plans to reach 80 percent of the U.S. population by the end of 2013…; the estimated [Begin Confidential Information] $3.8 billion [End Confidential Information] in additional capital expenditures to expand LTE coverage from 80 to 97 percent of the U.S. population; and AT&T’s commitment to expand LTE to over 97 percent of the U.S. population as a result of this transaction.

That part, “$3.8 billion,” between the words “Begin Confidential Information” and “End Confidential Information” was supposed to be redacted, but apparently wasn’t when the letter was first posted to the FCC’s website.

While Public Knowledge and other critics of the deal would have you believe that this proves AT&T could roll out nationwide LTE service for one-tenth the cost of the T-Mobile deal, it’s basically impossible to tell what this number really means–except that it certainly doesn’t mean that.

Claims about its meaning are actually largely content-less; nothing I’ve seen asks (or can possibly answer) whether the number in the letter was full cost, partial cost, annualized cost, based off of what baseline, etc., etc.  Moreover, unless I’m mistaken, nothing in the letter said anything at all about $3.8 billion being used to relieve congestion, meet future demand, increase speeds, reduce latency, expand coverage in urban areas, etc.  It seems to me that it’s referring to “additional” (additional to what?) capital expense to build infrastructure to make it even possible to offer LTE coverage to 97% of the U.S. population following the merger.  AT&T has from the outset said (bragged, more like it, because it’s supposed to bring lots of jobs and that’s what the politicians care about) that it planned to spend an “additional” $8 billion–additional to the $39 billion required to buy T-Mobile, that is–to build out its infrastructure as part of the deal.  But neither this letter nor any of AT&T’s statements (nor anyone with any familiarity with the relevant facts) has ever said it could or would have full-speed, LTE service available and up and running to 97% of the country for $3.8 billion or even $8 billion–or even merely $39 billion.  In fact, AT&T seemed to be saying that it was going to cost at least $47 billion to make that happen (and I can assure you that doesn’t begin to account for all the costs associated with integrating T-Mobile with AT&T once the $39 billion is out the door).

As I’ve alluded to above, deploying LTE service to rural areas is probably not as important for AT&T as increasing its network’s capacity in urban areas. The T-Mobile deal allows AT&T to alleviate the congestion problems experienced by its existing customers in urban areas more quickly than any other option–and because T-Mobile’s network is already up and running, that’s still true even if the federal government were somehow able to make tons of spectrum immediately available.  Moreover, with respect to the $3.8 billion, as I’ve discussed at length above, without T-Mobile’s–or someone’s!–additional spectrum and the miraculous removal of local government impediments to tower construction, pretty much no amount of money would enable AT&T to actually deliver LTE service to 97% of the country.  Is that what it would cost to build the extra pieces of hardware necessary to support such an offering?  That sounds plausible.  But actually deliver it?  Hardly.

And just to play this out, let’s say the letter did mean just that–that AT&T could deliver real, full LTE service to 97% of the country for a mere $3.8 billion direct, marginal outlay, even without T-Mobile.  It is still the case that none of us outsiders knows what such a claim would assume about where the necessary spectrum would come from; what, absent the merger, the effect would be on existing 3G coverage, congestion, pricing, etc.; or what the expected ROI for such a project would be.  Elsewhere in the letter its author states that AT&T considered whether making this investment (without the T-Mobile merger) was prudent, and repeatedly rejected it.  In other words, all those armchair CEOs are organizing AT&T’s business and spending its money without the foggiest clue as to what the real consequences of doing so would be–and then claiming that AT&T, which, unlike them, is actually in possession of the data relevant to such an assessment, must be lying, and could only justify spending $39 billion to buy T-Mobile as a means of securing its monopoly power.

And I think it’s important to gut-check that claim as well, since it’s what critics claim to fear (The Ma Bell from the Black Lagoon).  Unpacked, it goes something like this:

Given that:

  1. AT&T is going to spend $39 billion to buy T-Mobile;
  2. It is going to spend $8 billion to build additional infrastructure;
  3. Having bought T-Mobile, it is going to incur some ungodly amount of expense integrating T-Mobile’s assets and employees with its own;
  4. It is going to incur huge, ongoing additional costs to govern a now-larger, more-complex organization;
  5. It is going to continue to be regulated by the FCC and watched carefully by the DOJ and its unofficial consumer-watchdog minions;
  6. It will continue to face competition from its current largest and second-largest competitors;
  7. It will continue to face entry threats from the likes of Dish and LightSquared;
  8. It will continue to face competition from fixed broadband offered by the likes of Comcast and Time Warner;
  9. It will do all this quite publicly, under the watchful eyes of Congress and its union, to which it has made all manner of politically expedient promises;

 Then it follows that:

  1. Although it can’t muster the gumption to risk $3.8 billion to legitimately (it is claimed) extend full LTE coverage to 97% of the U.S. population, it nevertheless thinks it’s a sure bet that it will be able to recoup all of these expenditures, in this competitive and regulatory environment, by virtue of having thus taken out not its largest, not even its second-largest, but its smallest “national” competitor, and thereby having converted itself into an unfettered monopolist.  QED.

The mind boggles.

So.  Back to Milton and his suggestion that I was wrong to claim that the DOJ’s action here is a threat to innovation and progress and his assertion that AT&T’s claims surrounding the benefits of the transaction fail to stand up to scrutiny:  C’mon, Miltons of the world!  Where’s your normally healthy skepticism?  I know you don’t like big infrastructure providers.  I know you’re angry your iPhone isn’t as functional as it is beautiful.  I know capitalists are only slightly more trustworthy than regulators (or is it the other way around?).  But why give in so credulously to the claims of the professional critics?  Isn’t it more likely that the deal’s critics are just blowing smoke here because they don’t like any consolidation?  It doesn’t take much research to understand (to the extent anyone can understand something so complex) the current state of the U.S. broadband market and its discontents–and why something like this merger is a plausible response.  And you don’t have to like, trust, or even stand the sight of any business executive to know that, however stupid or evil, he is still constrained by powerful market forces beyond his ken.  And “Letter-Gate” is just another pseudo-scandal contrived to suit an agenda of aggressive government meddling.

We all ought to be more wary of such claims, less quick to join anyone in condemning big as bad, and far less quick to substitute, implicitly or explicitly, the known depredations of the government for the possible ones of the market without a hell of a lot better evidence for doing so.