Archives for AT&T


AT&T’s $102 billion acquisition of Time Warner, completed in 2018, will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.

The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the acquisition and thereby abandon its strategy of integrating content and distribution, it is clear that these predictions of impending market dominance were unfounded.

This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.  

The DOJ’s Antitrust Case against the Transaction

The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects. 

This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.

Many commentators, in both the trade press and significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would hold a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers.

Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court, and the decision was upheld on appeal by the D.C. Circuit. The district judge concluded that the DOJ had failed to show that the combined entity could credibly threaten to withhold “must have” content from distributors. A key reason: the carriage fees AT&T would forgo if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that withholding would have been an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.

The Fundamental Flaws in the DOJ’s Antitrust Case

The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere, and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize that the DOJ’s case was flawed from the outset for two more fundamental reasons.

False Assumption #1

The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities. Content is an abundant, renewable, and mobile resource. There are few barriers to entry in the content industry: a commercially promising idea will likely attract capital, which will in turn secure the equipment and personnel necessary for production. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestic and international, each of which can provide new content as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.

False Assumption #2

Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content-acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or want only dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or rely exclusively on) streaming services.

The Antitrust Case for the Transaction

When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.

Each of these incumbent platforms had (and has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, along with a worldwide subscriber base numbering in the hundreds of millions. If that’s not enough, AT&T was not the only entity that observed the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple spent approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, spent approximately $3.5 billion.

In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billion dollars of “content spend” just to stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.

The Enduring Virtues of Antitrust Prudence

AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.  

Among some scholars, regulators, and legislators, it has increasingly become received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.

In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust laws to provide enforcers and courts with greater latitude to block or re-engineer combinations whose competitive risks could not be sufficiently demonstrated under current statutory or case law.

The swift downfall of the AT&T/Time Warner transaction casts great doubt on this critique and accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.

The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.   

The DOJ and 20 state AGs sued Microsoft on May 18, 1998, for unlawful maintenance of its monopoly in the market for PC operating systems. The government accused the desktop giant of tying its web browser (Internet Explorer) to its operating system (Windows). Microsoft had indeed become dominant in the PC market by the late 1980s:

Source: Asymco

But after the introduction of smartphones in the mid-2000s, Microsoft’s market share of personal computing units (including PCs, smartphones, and tablets) collapsed:

Source: Benedict Evans

Steven Sinofsky pointed out why this was a classic case of disruptive innovation rather than sustaining innovation: “Google and Microsoft were competitors but only by virtue of being tech companies hiring engineers. After that, almost nothing about what was being made or sold was similar even if things could ultimately be viewed as substitutes. That is literally the definition of innovation.”

Browsers

Microsoft grew to dominance during the PC era by bundling its desktop operating system (Windows) with its productivity software (Office) and modularizing the hardware providers. By 1995, Bill Gates had realized that the internet was the next big thing, calling it “The Internet Tidal Wave” in a famous internal memo. Gates feared that the browser would function as “middleware” and disintermediate Microsoft from its relationship with the end-user. At the time, Netscape Navigator was gaining market share from the first browser to popularize the internet, Mosaic (so-named because it supported a multitude of protocols).

Later that same year, Microsoft released its own browser, Internet Explorer, which would be bundled with its Windows operating system. Internet Explorer soon grew to dominate the market:

Source: Browser Wars

Steven Sinofsky described how the browser threatened to undermine the Windows platform (emphasis added):

Microsoft saw browsers as a platform threat to Windows. Famously. Browsers though were an app — running everywhere, distributed everywhere. Microsoft chose to compete as though browsing was on par with Windows (i.e., substitutes).

That meant doing things like IBM did — finding holes in distribution where browsers could “sneak” in (e.g., OEM deals) and seeing how to make Microsoft browser work best and only with Windows. Sound familiar? It does to me.

Imagine (some of us did) a world instead where Microsoft would have built a browser that was an app distributed everywhere, running everywhere. That would have been a very different strategy. One that some imagined, but not when Windows was central.

Showing how much your own gravity as a big company can make even obvious steps strategically weak: Microsoft knew browsers had to be cross-platform so it built Internet Explorer for Mac and Unix. Neat. But wait, the main strategic differentiator for Internet Explorer was ActiveX which was clearly Windows only.

So even when trying to compete in a new market the strategy was not going to work technically and customers would immediately know. Either they would ignore the key part of Windows or the key part of x-platform. This is what a big company “master plan” looks like … Active Desktop.

Regulators claimed victory but the loss already happened. But for none of the reasons the writers of history say at least [in my humble opinion]. As a reminder, Microsoft stopped working on Internet Explorer 7 years before Chrome even existed — literally didn’t release a new version for 5+ years.

One of the most important pieces of context for this case is that other browsers were also free for personal use (even if they weren’t bundled with an operating system). At the time, Netscape was free for individuals. Mosaic was free for non-commercial use. Today, Chrome and Firefox are free for all users. Chrome makes money for Google by increasing the value of its ecosystem and serving as a complement for its other products (particularly search). Firefox is able to more than cover its costs by charging Google (and others) to be the default search engine in its browser.

By bundling Internet Explorer with Windows for free, Microsoft was arguably charging the market rate. In highly competitive markets, economic theory tells us the price should approach marginal cost — which in software is roughly zero. As James Pethokoukis argued, there are many more reasons to be skeptical about the popular narrative surrounding the Microsoft case. The reasons for doubt range across features, products, and markets, including server operating systems, mobile devices, and search engines. Let’s examine a few of them.

Operating Systems

In a 2007 article for Wired titled “I Blew It on Microsoft,” Lawrence Lessig, a Harvard law professor, admits that his predictions about the future of competition in computer operating systems failed to account for the potential of open-source solutions:

We pro-regulators were making an assumption that history has shown to be completely false: That something as complex as an OS has to be built by a commercial entity. Only crazies imagined that volunteers outside the control of a corporation could successfully create a system over which no one had exclusive command. We knew those crazies. They worked on something called Linux.

According to Web Technology Surveys, as of April 2019, about 70 percent of servers use a Linux-based operating system while the remaining 30 percent use Windows.

Mobile

In 2007, Steve Ballmer believed that Microsoft would be the dominant company in smartphones, saying in an interview with USA Today (emphasis added):

There’s no chance that the iPhone is going to get any significant market share. No chance. It’s a $500 subsidized item. They may make a lot of money. But if you actually take a look at the 1.3 billion phones that get sold, I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get.

But as Ballmer himself noted in 2013, Microsoft was too committed to the Windows platform to fully pivot its focus to mobile:

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

This is another classic example of the innovator’s dilemma. Microsoft enjoyed high profit margins in its Windows business, which caused the company to underrate the significance of the shift from PCs to smartphones.

Search

To further drive home how dependent Microsoft was on its legacy products, this 2009 WSJ piece notes that the company had a search engine ad service in 2000 and shut it down to avoid cannibalizing its core business:

Nearly a decade ago, early in Mr. Ballmer’s tenure as CEO, Microsoft had its own inner Google and killed it. In 2000, before Google married Web search with advertising, Microsoft had a rudimentary system that did the same, called Keywords, running on the Web. Advertisers began signing up. But Microsoft executives, in part fearing the company would cannibalize other revenue streams, shut it down after two months.

Ben Thompson says we should wonder if the case against Microsoft was a complete waste of everyone’s time (and money): 

In short, to cite Microsoft as a reason for antitrust action against Google in particular is to get history completely wrong: Google would have emerged with or without antitrust action against Microsoft; if anything the real question is whether or not Google’s emergence shows that the Microsoft lawsuit was a waste of time and money.

The most obvious implications of the Microsoft case were negative: (1) PCs became bloated with “crapware”; (2) competition in the browser market failed to materialize for many years; (3) PCs were less safe because Microsoft couldn’t bundle security software; and (4) some PC users missed out on using first-party software from Microsoft because it couldn’t be bundled with Windows. When weighed against these large costs, the supposed benefits pale in comparison.

Conclusion

In all three cases I’ve discussed in this series — AT&T, IBM, and Microsoft — the real story was not that antitrust enforcers divined the perfect time to break up — or regulate — the dominant tech company. The real story was that slow and then sudden technological change outpaced the organizational inertia of incumbents, permanently displacing the former tech giants from their dominant position in the tech ecosystem. 

The next paradigm shift will be near-impossible to predict. Anyone who knew which technology it would be — and when — could make far more money implementing that idea than playing pundit in the media. Regardless of whether the future winner will be Google, Facebook, Amazon, Apple, Microsoft, or some unknown startup company, antitrust enforcers should remember that the proper goal of public policy in this domain is to maximize total innovation — from firms both large and small. Fetishizing innovation by small companies — and using law enforcement to harass big companies in the hope of an indirect benefit to competition — will make us all worse off in the long run.

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the market for local and long-distance telephone service as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the “Baby Bells” plus the parent company), which was completed in 1984. As a remedy, the government required the company to divest its local operating companies and guarantee equal access to all long-distance and information service providers (ISPs).

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the US, EU, and Canada shows that the latter two were able to achieve similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment.

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.

Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:

The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.

Tim Wu, a law professor at Columbia University, summarized the overarching narrative recently (emphasis added):

If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.

The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).

A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.

(1) IBM: software as product, Apple, Microsoft, Intel, Seagate, Sun, Dell, Compaq

(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries

(3) Microsoft: Google, Facebook, Amazon

Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.

Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.

In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.

Competition for the Market

In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):

When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!

But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.

Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.

In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner take most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.

Disruptive Innovation

As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for the incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point it can serve the whole market, it’s too late for the incumbent to switch technologies and catch up. This process looks like overlapping S-curves:

Source: Max Mayblum

We see these S-curves in the technology industry all the time:

Source: Benedict Evans

As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy — designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular products.

Moore’s Law

Source: Our World in Data

Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:

When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.
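
Stated as a back-of-the-envelope formula (a sketch that assumes a clean two-year doubling period, which real fabrication cycles only approximate), the transistor count after $t$ years is roughly:

$$ N(t) \approx N_0 \cdot 2^{t/2} $$

where $N_0$ is the starting count. A decade of doubling therefore multiplies transistor counts by $2^{5} = 32$, and two decades by $2^{10} \approx 1{,}000$, which is the sense in which computing power becomes “almost free” and each successive computer class can be dramatically cheaper and more capable than the last.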

Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.

Bell’s Law of Computer Classes

Source: Brad Campbell

A corollary to Moore’s Law, Bell’s Law of Computer Classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, the law has played out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.

Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.

Jonathan B. Baker, Nancy L. Rose, Steven C. Salop, and Fiona Scott Morton don’t like vertical mergers:

Vertical mergers can harm competition, for example, through input foreclosure or customer foreclosure, or by the creation of two-level entry barriers.  … Competitive harms from foreclosure can occur from the merged firm exercising its increased bargaining leverage to raise rivals’ costs or reduce rivals’ access to the market. Vertical mergers also can facilitate coordination by eliminating a disruptive or “maverick” competitor at one vertical level, or through information exchange. Vertical mergers also can eliminate potential competition between the merging parties. Regulated firms can use vertical integration to evade rate regulation. These competitive harms normally occur when at least one of the markets has an oligopoly structure. They can lead to higher prices, lower output, quality reductions, and reduced investment and innovation.

Baker et al. go so far as to argue that any vertical merger in which the downstream firm is subject to price regulation should face a presumption that the merger is anticompetitive.

George Stigler’s well-known article on vertical integration identifies several ways in which vertical integration increases welfare by subverting price controls:

The most important of these other forces, I believe, is the failure of the price system (because of monopoly or public regulation) to clear markets at prices within the limits of the marginal cost of the product (to the buyer if he makes it) and its marginal-value product (to the seller if he further fabricates it). This phenomenon was strikingly illustrated by the spate of vertical mergers in the United States during and immediately after World War II, to circumvent public and private price control and allocations. A regulated price of OA was set (Fig. 2), at which an output of OM was produced. This quantity had a marginal value of OB to buyers, who were rationed on a nonprice basis. The gain to buyers  and sellers combined from a free price of NS was the shaded area, RST, and vertical integration was the simple way of obtaining this gain. This was the rationale of the integration of radio manufacturers into cabinet manufacture, of steel firms into fabricated products, etc.
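
Stigler’s Figure 2 is not reproduced here, so a rough algebraic restatement may help. Purely as an illustrative sketch, assume linear demand and supply, and let $P_A$ stand for the regulated price (Stigler’s OA), $Q_M$ for the rationed output (OM), $P_B$ for buyers’ marginal value at that output (OB), and $Q^{*}$ for the unregulated market-clearing output. The combined gain to buyers and sellers from restoring output to $Q^{*}$ (Stigler’s shaded area RST) is then approximately:

$$ \Delta W = \int_{Q_M}^{Q^{*}} \bigl[v(q) - c(q)\bigr]\,dq \approx \tfrac{1}{2}\,(P_B - P_A)\,(Q^{*} - Q_M) $$

where $v(q)$ is buyers’ marginal value and $c(q)$ is sellers’ marginal cost. A vertically integrated buyer-seller pair can capture this surplus because its internal transfer is not constrained by the controlled price, allowing output to expand toward $Q^{*}$.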

Stigler was on to something:

  • In 1947, Emerson Radio acquired Plastimold, a maker of plastic radio cabinets. The president of Emerson at the time, Benjamin Abrams, stated “Plastimold is an outstanding producer of molded radio cabinets and gives Emerson an assured source of supply of one of the principal components in the production of radio sets.” [emphasis added] 
  • In the same year, the Congressional Record reported, “Admiral Corp. like other large radio manufacturers has reached out to take over a manufacturer of radio cabinets, the Chicago Cabinet Corp.” 
  • In 1948, the Federal Trade Commission cited wartime price controls and shortages as reasons for vertical mergers in the textiles industry, as well as for distillers’ acquisitions of wineries.

While there may have been some public policy rationale for price controls, it’s clear the controls resulted in shortages and a deadweight loss in many markets. As such, it’s likely that vertical integration to avoid the price controls improved consumer welfare (if only slightly, as in the figure above) and reduced the deadweight loss.

Rather than leading to monopolization, Stigler provides examples in which vertical integration was employed to circumvent monopolization by cartel quotas and/or price-fixing: “Almost every raw-material cartel has had trouble with customers who wish to integrate backward, in order to negate the cartel prices.”

In contrast to Stigler’s analysis, Salop and Daniel P. Culley begin from an implied assumption that where price regulation occurs, the controls are good for society. Thus, they argue that avoidance of the price controls is harmful or against the public interest:

Example: The classic example is the pre-divestiture behavior of AT&T, which allegedly used its purchases of equipment at inflated prices from its wholly-owned subsidiary, Western Electric, to artificially increase its costs and so justify higher regulated prices.

This claim is supported by the court in U.S. v. AT&T [emphasis added]:

The Operating Companies have taken these actions, it is said, because the existence of rate of return regulation removed from them the burden of such additional expense, for the extra cost could simply be absorbed into the rate base or expenses, allowing extra profits from the higher prices to flow upstream to Western rather than to its non-Bell competition.

Even so, the pass-through of higher costs seems only a minor concern to the court relative to the “three hats” worn by AT&T and its subsidiaries in the (1) setting of standards, (2) counseling of operating companies in their equipment purchases, and (3) production of equipment for sale to the operating companies [emphasis added]:

The government’s evidence has depicted defendants as sole arbiters of what equipment is suitable for use in the Bell System, a role that carries with it a power of subjective judgment that can be and has been used to advance the sale of Western Electric’s products at the expense of the general trade. First, AT&T, in conjunction with Bell Labs and Western Electric, sets the technical standards under which the telephone network operates and the compatibility specifications which equipment must meet. Second, Western Electric and Bell Labs … serve as counselors to the Operating Companies in their procurement decisions, ostensibly helping them to purchase equipment that meets network standards. Third, Western also produces equipment for sale to the Operating Companies in competition with general trade manufacturers.

The upshot of this “wearing of three hats” is, according to the government’s evidence, a rather obviously anticompetitive situation. By setting technical or compatibility standards and by either not communicating these standards to the general trade or changing them in mid-stream, AT&T has the capacity to remove, and has in fact removed, general trade products from serious consideration by the Operating Companies on “network integrity” grounds. By either refusing to evaluate general trade products for the Operating Companies or producing biased or speculative evaluations, AT&T has been able to influence the Operating Companies, which lack independent means to evaluate general trade products, to buy Western. And the in-house production and sale of Western equipment provides AT&T with a powerful incentive to exercise its “approval” power to discriminate against Western’s competitors.

It’s important to keep in mind that rate-of-return regulation was not thrust upon AT&T; rather, it was a quid pro quo in which state and federal regulators acted to eliminate AT&T/Bell competitors in exchange for price regulation. In a floor speech to Congress in 1921, Rep. William J. Graham declared:

It is believed to be better policy to have one telephone system in a community that serves all the people, even though it may be at an advanced rate, properly regulated by State boards or commissions, than it is to have two competing telephone systems.

For purposes of Salop and Culley’s integration-to-evade-price-regulation example, it’s important to keep in mind that AT&T acquired Western Electric in 1882, or about two decades before telephone pricing regulation was contemplated and eight years before the Sherman Antitrust Act. While AT&T may have used vertical integration to take advantage of rate-of-return price regulation, it’s simply not true that AT&T acquired Western Electric to evade price controls.

Salop and Culley provide a more recent example:

Example: Potential evasion of regulation concerns were raised in the FTC’s analysis in 2008 of the Fresenius/Daiichi Sankyo exclusive sub-license for a Daiichi Sankyo pharmaceutical used in Fresenius’ dialysis clinics, which potentially could allow evasion of Medicare pricing regulations.

As with the AT&T example, this example is not about evasion of price controls. Rather, it raises concerns about taking advantage of Medicare’s pricing formula.

At the time of the deal, Medicare reimbursed dialysis clinics based on a drug manufacturer’s Average Sales Price (“ASP”) plus six percent, where ASP was calculated by averaging the prices paid by all customers, including any discounts or rebates. 

The FTC argued that, by setting an artificially high transfer price for the drug sold to Fresenius, the merged firm could raise the ASP, thereby increasing the Medicare reimbursement to all clinics providing the same drug (which would not only increase the costs to Medicare but also increase income to all clinics providing the drug). Although the FTC claims this would be anticompetitive, the agency does not describe in what ways competition would be harmed.
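
To make the mechanics concrete, here is a minimal numerical sketch of the ASP-plus-6-percent formula described above. The prices and volumes are hypothetical, chosen only to show the direction of the effect; they are not figures from the FTC’s analysis.

```python
# Hypothetical illustration of how an inflated intra-company transfer price
# raises an ASP-based reimbursement paid to every clinic. All numbers are invented.

def asp(sales):
    """Average Sales Price: volume-weighted average price across all customers."""
    total_revenue = sum(price * units for price, units in sales)
    total_units = sum(units for _, units in sales)
    return total_revenue / total_units

# Before the deal: all clinics buy at the market price of $100 per unit.
market_sales = [(100.0, 900)]                 # (price, units)
print(round(asp(market_sales) * 1.06, 2))     # ASP + 6% -> 106.0

# After the deal: 100 units are "sold" to affiliated clinics at a $150 transfer price.
post_deal_sales = [(100.0, 900), (150.0, 100)]
print(round(asp(post_deal_sales) * 1.06, 2))  # ASP + 6% -> 111.3, paid to every clinic
```

Because the reimbursement formula is market-wide, the inflated transfer price raises the payment every clinic receives for the drug, which is why the concern runs to Medicare’s costs (and clinics’ revenues) rather than to any readily identified harm to competition.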

The FTC introduces an interesting wrinkle in noting that a few years after the deal would have been completed, “substantial changes to the Medicare program relating to dialysis services … would eliminate the regulations that give rise to the concerns created by the proposed transaction.” Specifically, payment for dialysis services would shift from fee-for-service to capitation.

This wrinkle highlights a serious problem with a presumption that any purported evasion of price controls is an antitrust violation. Namely, if the controls go away, so does the antitrust violation. 

Conversely – as Salop and Culley seem to argue with their AT&T example – a vertical merger could be retroactively declared anticompetitive if price controls are imposed after the merger is completed (even decades later and even if the price regulations were never anticipated at the time of the merger).

It’s one thing to argue that avoiding price regulation runs counter to the public interest, but it’s another thing to argue that avoiding price regulation is anticompetitive. Indeed, as Stigler argues, if the price controls stifle competition, then avoidance of the controls may enhance competition. Placing such mergers under heightened scrutiny, such as an anticompetitive presumption, is a solution in search of a problem.

The Wall Street Journal dropped an FCC bombshell last week, although I’m not sure anyone noticed. In an article ostensibly about the possible role that MFNs might play in the Comcast/Time Warner Cable merger, the Journal noted that

The FCC is encouraging big media companies to offer feedback confidentially on Comcast’s $45-billion offer for Time Warner Cable.

Not only is the FCC holding secret meetings, but it is encouraging Comcast’s and TWC’s commercial rivals to hold confidential meetings and to submit information under seal. This is not a normal part of ex parte proceedings at the FCC.

In the typical proceeding of this sort – known as a “permit-but-disclose proceeding” – ex parte communications are subject to a host of disclosure requirements delineated in 47 CFR 1.1206. But section 1.1200(a) of the Commission’s rules permits the FCC, in its discretion, to modify the applicable procedures if the public interest so requires.

If you dig deeply into the Public Notice seeking comments on the merger, you find a single sentence stating that

Requests for exemptions from the disclosure requirements pursuant to section 1.1204(a)(9) may be made to Jonathan Sallet [the FCC’s General Counsel] or Hillary Burchuk [who heads the transaction review team].

Similar language appears in the AT&T/DirecTV transaction Public Notice.

This leads to the cited rule exempting certain ex parte presentations from the usual disclosure requirements in such proceedings, including the referenced one that exempts ex partes from disclosure when

The presentation is made pursuant to an express or implied promise of confidentiality to protect an individual from the possibility of reprisal, or there is a reasonable expectation that disclosure would endanger the life or physical safety of an individual

So the FCC is inviting “media companies” to offer confidential feedback and to hold secret meetings that the FCC will hold confidential because of “the possibility of reprisal” based on language intended to protect individuals.

Such deviations from the standard permit-but-disclose procedures are extremely rare. As in non-existent. I guess there might be other examples, but I was unable to find a single one in a quick search. And I’m willing to bet that the language inviting confidential communications in the PN hasn’t appeared before – and certainly not in a transaction review.

It is worth pointing out that the language in 1.1204(a)(9) is remarkably similar to language that appears in the Freedom of Information Act. As the DOJ notes regarding that exemption:

Exemption 7(D) provides protection for “records or information compiled for law enforcement purposes [which] could reasonably be expected to disclose the identity of a confidential source… to ensure that “confidential sources are not lost through retaliation against the sources for past disclosure or because of the sources’ fear of future disclosure.”

Surely the fear-of-reprisal rationale for confidentiality makes sense in that context – but here? And invoked to elicit secret meetings and to keep confidential information from corporations instead of individuals, it makes even less sense (and doesn’t even obviously comply with the rule itself). It is not as though – as far as I know – someone approached the Commission with stated fears and requested it implement a procedure for confidentiality in these particular reviews.

Rather, this is the Commission inviting non-transparent process in the midst of a heated, politicized and heavily-scrutinized transaction review.

The optics are astoundingly bad.

Unfortunately, this kind of behavior seems to be par for the course for the current FCC. As Commissioner Pai has noted on more than one occasion, the minority commissioners have been routinely kept in the dark with respect to important matters at the Commission – not coincidentally, in other highly-politicized proceedings.

What’s particularly troubling is that, for all its faults, the FCC’s process is typically extremely open and transparent. Public comments, endless ex parte meetings, regular Open Commission Meetings are all the norm. And this is as it should be. Particularly when it comes to transactions and other regulated conduct for which the regulated entity bears the burden of proving that its behavior does not offend the public interest, it is obviously necessary to have all of the information – to know what might concern the Commission and to make a case respecting those matters.

The kind of arrogance on display of late, and the seeming abuse of process that goes along with it, hearkens back to the heady days of Kevin Martin’s tenure as FCC Chairman – a tenure described as “dysfunctional” and noted for its abuse of process.

All of which should stand as a warning to the vocal, pro-regulatory minority pushing for the FCC to proclaim enormous power to regulate net neutrality – and broadband generally – under Title II. Just as Chairman Martin tried to manipulate diversity rules to accomplish his pet project of cable channel unbundling, some future Chairman will undoubtedly claim authority under Title II to accomplish some other unintended, but politically expedient, objective — and it may not be one the self-proclaimed consumer advocates like, when it happens.

Bad as that risk may be, it is only made more likely by regulatory reviews undertaken in secret. Whatever impelled the Chairman to invite unprecedented secrecy into these transaction reviews, it seems to be of a piece with a deepening politicization and abuse of process at the Commission. It’s both shameful – and deeply worrying.

For those in the DC area interested in telecom regulation, there is another great event opportunity coming up next week.

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.

The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.

So which was it? More importantly, what can we learn from the seventy-year period before the 1984 break-up of AT&T, and the last three decades of efforts to unleash competition? With fewer than a third of Americans relying on traditional telephony and Internet-based competitors increasingly driving competition, what does universal service mean in the digital era? As Congress contemplates overhauling the Communications Act, how can policymakers promote universal service through competition, by promoting innovation and investment? What should a new Kingsbury Commitment look like?

Following a luncheon keynote address by FCC Commissioner Ajit Pai, a diverse panel of experts moderated by TechFreedom President Berin Szoka will explore these issues and more. The panel includes:

  • Harold Feld, Public Knowledge
  • Rob Atkinson, Information Technology & Innovation Foundation
  • Hance Haney, Discovery Institute
  • Jeff Eisenach, American Enterprise Institute
  • Fred Campbell, Former FCC Wireless Bureau Chief

Space is limited so RSVP now if you plan to attend in person. A live stream of the event will be available on this page. You can follow the conversation on Twitter on the #Kingsbury100 hashtag.

When:
Thursday, December 19, 2013
11:30 – 12:00 Registration & lunch
12:00 – 1:45 Event & live stream

The live stream will begin on this page at noon Eastern.

Where:
The Methodist Building
100 Maryland Ave NE
Washington D.C. 20002

Questions?
Email contact@techfreedom.org.

The debates over mobile spectrum aggregation and the auction rules for the FCC’s upcoming incentive auction — like all regulatory rent-seeking — can be farcical. One aspect of the debate in particular is worth highlighting, as it puts into stark relief the tendentiousness of self-interested companies making claims about the public interestedness of their preferred policies: The debate over how and whether to limit the buying and aggregating of lower frequency (in this case 600 MHz) spectrum.

A little technical background is in order. At its most basic, a signal carried in higher frequency spectrum doesn’t travel as well as a signal carried in lower frequency spectrum. The higher the frequency, the closer together cell towers need to be to maintain a good signal.
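
For readers who want a feel for the underlying physics, a minimal free-space sketch is enough (it ignores antenna gains, terrain, and the building-penetration effects discussed below, so it is only a first-order approximation). Under the Friis transmission equation, received power falls with the square of both distance and frequency, so for a fixed link budget the maximum usable cell radius scales inversely with frequency:

$$ P_r \propto P_t \left(\frac{c}{4\pi f d}\right)^{2} \quad\Longrightarrow\quad d_{\max} \propto \frac{1}{f} $$

with $P_t$ and $P_r$ the transmitted and received power, $f$ the carrier frequency, $d$ the distance, and $c$ the speed of light. Holding everything else constant, a 600 MHz signal therefore reaches roughly three times as far as a 1900 MHz signal, and a single tower covers roughly ten times the area. Real-world propagation models are less generous, but the qualitative ranking is the same.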

600 MHz is relatively low frequency for wireless communications. In rural areas it is helpful in reducing infrastructure costs for wide area coverage because cell towers can be placed further apart and thus fewer towers must be built. But in cities, population density trumps frequency, and propagation range is essentially irrelevant for infrastructure costs. In other words, it doesn’t matter how far your signal will travel if congestion alleviation demands you build cell towers closer together than even the highest frequency spectrum requires anyway. The optimal — nay, the largest usable — cell radius in urban and suburban areas is considerably smaller than the sort of cell radius that low frequency spectrum allows for.

It is important to note, of course, that signal distance isn’t the only propagation characteristic imparting value to lower frequency spectrum; in particular, it is also valuable even in densely populated settings for its ability to travel through building walls. That said, however, the primary arguments made in favor of spreading the 600 MHz wealth — of effectively subsidizing its purchase by smaller carriers — are rooted in its value in offering more efficient coverage in less-populated areas. Thus the FCC has noted that while there may be significant infrastructure cost savings associated with deploying lower frequency networks in rural areas, this lower frequency spectrum provides little cost advantage in urban or suburban areas (even though, as noted, it has building-penetrating value there).

It is primarily because of these possible rural network cost advantages that certain entities (the Department of Justice, Free Press, the Competitive Carriers Association, e.g.) have proposed that AT&T and Verizon (both of whom have significant lower frequency spectrum holdings) should be restricted from winning “too much” spectrum in the FCC’s upcoming 600 MHz incentive auctions. The argument goes that, in order to ensure national competition — that is, to give other companies financial incentive to build out their networks into rural areas — the auction should be structured to favor Sprint and T-Mobile (both of whose spectrum holdings are mostly in the upper frequency bands) as awardees of this low-frequency spectrum, at commensurately lower cost.

Shockingly, T-Mobile and Sprint are on board with this plan.

So, to recap: 600 MHz spectrum confers cost savings when used in rural areas. It has much less effect on infrastructure costs in urban and suburban areas. T-Mobile and Sprint don’t have much of it; AT&T and Verizon have lots. If we want T-Mobile and Sprint to create the competing national networks that the government seems dead set on engineering, we need to put a thumb on the scale in the 600 MHz auctions. So they can compete in rural areas. Because that’s where 600 MHz spectrum offers cost advantages. In rural areas.

So what does T-Mobile plan to do if it wins the spectrum lottery? Certainly not build in rural areas. As Craig Moffett notes, currently “T-Mobile’s U.S. network is fast…but coverage is not its strong suit, particularly outside of metro areas.” And for the future? T-Mobile’s breakneck LTE coverage ramp-up since the failed merger with AT&T is expected to top out at 225 million people, or the 71% of consumers living in the most-populated areas (it’s currently somewhere over 200 million). “Although sticking to a smaller network, T-Mobile plans to keep increasing the depth of its LTE coverage” (emphasis added). Depth. That means more bandwidth in high-density areas. It does not mean broader coverage. Obviously.

Sprint, meanwhile, is devoting all of its resources to playing LTE catch-up in the most-populated areas; it isn’t going to waste valuable spectrum resources on expanded rural build out anytime soon.

The kicker is that T-Mobile relies on AT&T’s network to provide its urban and suburban customers with coverage (3G) when they do roam into rural areas, taking advantage of a merger break-up provision that gives it roaming access to AT&T’s 3G network. In other words, T-Mobile’s national network is truly “national” only insofar as it piggybacks on AT&T’s broader coverage. And because AT&T will get the blame for congestion when T-Mobile’s customers roam onto its network, the cost to T-Mobile of hamstringing AT&T’s network is low.

The upshot is that T-Mobile seems not to need, nor does it intend to deploy, lower frequency spectrum to build out its network in less-populated areas. Defenders say that rigging the auction rules to benefit T-Mobile and Sprint will allow them to build out in rural areas to compete with AT&T’s and Verizon’s broader networks. But this is a red herring. They may get the spectrum, but they won’t use it to extend their coverage in rural areas; they’ll use it to add “depth” to their overloaded urban and suburban networks.

But for AT&T the need for additional spectrum is made more acute by the roaming deal, which requires it to serve its own customers and those of T-Mobile.

This makes clear the reason underlying T-Mobile’s advocacy for rigging the 600 MHz auction – it is simply so that T-Mobile can acquire this spectrum on the cheap to use in urban and suburban areas, not so that it can deploy a wide rural network. And the beauty of it is that by hamstringing AT&T’s ability to acquire this spectrum, it becomes more expensive for AT&T to serve T-Mobile’s own customers!

Two birds, one stone: lower your costs, raise your competitor’s costs.

The lesson is this: If we want 600 MHz spectrum to be used efficiently to provide rural LTE service, we should assume that the highest bidder will make the most valuable use of the spectrum. The experience of the relatively unrestricted 700 MHz auction in 2008 confirms this. The purchase of 700 MHz spectrum by AT&T and Verizon led to the US becoming the world leader in LTE. Why mess with success?

[Cross-posted at RedState]

Susan Crawford recently received the OneCommunity Broadband Hero Award for being a “tireless advocate for 21st century high capacity network access.” In her recent debate with Geoffrey Manne and Berin Szoka, she emphasized that there is little competition in broadband or between cable broadband and wireless, asserting that the main players have effectively divided the markets. As a result, she argues (as she did here at 17:29) that broadband and wireless providers “are deciding not to invest in the very expensive infrastructure because they are very happy with the profits they are getting now.” In the debate, Manne countered by pointing to substantial investment and innovation in both the wired and wireless broadband marketplaces, and arguing that this is not something monopolists insulated from competition do. So, who’s right?

The recently released 2013 Progressive Policy Institute Report, U.S. Investment Heroes of 2013: The Companies Betting on America’s Future, has two useful little tables that lend support to Manne’s counterargument.

[Table: top 25 nonfinancial-company investors, from the PPI report]

The first shows the top 25 investors that are nonfinancial companies, and guess who comes in 1st, 2nd, 10th, 13th, and 17th place? None other than AT&T, Verizon Communications, Comcast, Sprint Nextel, and Time Warner, respectively.

[Table: the same ranking with energy companies excluded]

And when the table is adjusted by removing energy companies, those ranks become 1st, 2nd, 5th, 6th, and 9th. In fact, cable and telecom combined to invest over $50.5 billion in 2012.

This high level of investment by supposed monopolists is not a new development. The Progressive Policy Institute’s 2012 Report, Investment Heroes: Who’s Betting on America’s Future? indicates that the same main players have been investing heavily for years. Since 1996, the cable industry has invested over $200 billion into infrastructure alone. These investments have allowed 99.5% of Americans to have access to broadband – via landline, wireless, or both – as of the end of 2012.

There’s more. Not only has there been substantial investment that has increased access, but the speeds of service have increased dramatically over the past few years. The National Broadband Map data show that by the end of 2012:

  • Landline service ≧ 25 megabits per second download available to 81.7% of households, up from 72.9% at the end of 2011 and 58.4% at the end of 2010
  • Landline service ≧ 100 megabits per second download available to 51.5% of households, up from 43.4% at the end of 2011 and only 12.9% at the end of 2010
  • ≧ 1 gigabit per second download available to 6.8% of households, predominantly via fiber
  • Fiber at any speed was available to 22.9% of households, up from 16.8% at the end of 2011 and 14.8% at the end of 2010
  • Landline broadband service at the 3 megabits / 768 kilobits threshold available to 93.4% of households, up from 92.8% at the end of 2011
  • Mobile wireless broadband at the 3 megabits / 768 kilobits threshold available to 94.1% of households, up from 75.8% at the end of 2011
  • Mobile wireless broadband providing ≧ 10 megabits per second download available to 87% of households, up from 70.6% at the end of 2011 and 8.9% at the end of 2010
  • Landline broadband ≧ 10 megabits download was available to 91.1% of households

This leaves only one question: Will the real broadband heroes please stand up?

Over at Forbes, Berin Szoka and I have a lengthy piece discussing “10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is.” Crawford has become the unofficial spokesman for a budding campaign to reshape broadband. She sees cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation — and argues for imposing 19th century common carriage regulation on the Internet. Berin and I begin (we expect to contribute much more to this discussion in the future) to explain both why her premises are erroneous and why her prescription is faulty. Here’s a taste:

Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”

Even if she’s right, she wildly exaggerates the costs. Using a back-of-the-envelope calculation, Crawford claims that slow downloads (compared to other countries) could cost the U.S. $3 trillion/year in lost productivity from wasted time spent “waiting for a link to load or an app to function on your wireless device.” This intentionally sensationalist claim, however, rests on a purely hypothetical average wait time in the U.S. of 30 seconds (vs. 2 seconds in Japan). Whatever the actual numbers might be, her methodology would still be shaky, not least because time spent waiting for laggy content isn’t necessarily simply wasted. And for most of us, the opportunity cost of waiting for Angry Birds to load on our phones isn’t counted in wages — it’s counted in beers or time on the golf course or other leisure activities. These are important, to be sure, but does anyone seriously believe our GDP would grow 20% if only apps were snappier? Meanwhile, actual econometric studies looking at the productivity effects of faster broadband on businesses have found that higher broadband speeds are not associated with higher productivity.
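To make the sensitivity critique concrete, here is a minimal back-of-the-envelope sketch. The lost_productivity helper and every parameter below are hypothetical placeholders of my own; none of these inputs come from Crawford’s calculation or from our piece. Because the estimate is a straight multiplication, the headline dollar figure moves one-for-one with the assumed wait time: halve the assumed wait and the “cost” halves with it.

```python
# Minimal sketch of a "wasted waiting time" productivity estimate.
# All parameters are hypothetical placeholders, chosen only to show how
# linearly the headline number scales with the assumed per-load wait.

def lost_productivity(wait_s, baseline_s, loads_per_day, users, value_per_hour):
    """Annual 'cost' of excess waiting, valuing every waited second as work time."""
    excess_per_load = wait_s - baseline_s                                 # seconds per load
    hours_per_year = excess_per_load * loads_per_day * 365 * users / 3600.0
    return hours_per_year * value_per_hour                                # dollars per year

USERS = 250e6   # hypothetical number of U.S. device users
LOADS = 50      # hypothetical page/app loads per user per day
VALUE = 25.0    # hypothetical dollars per hour of "lost" time

for assumed_wait in (30, 15, 2.5):
    cost = lost_productivity(assumed_wait, 2, LOADS, USERS, VALUE)
    print(f"assumed wait {assumed_wait:>4}s -> ~${cost / 1e12:.2f} trillion/year")
```

Note, too, that the sketch prices every waited second at a wage-like rate, which is exactly the assumption questioned above: for most users, the time “lost” to a slow app load comes out of leisure, not measured output.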

* * *

So how do we guard against the possibility of consumer harm without making things worse? For us, it’s a mix of promoting both competition and a smarter, subtler role for government.

Despite Crawford’s assertion that the DOJ should have blocked the Comcast-NBCU merger, antitrust and consumer protection laws do operate to constrain corporate conduct, not only through government enforcement but also private rights of action. Antitrust works best in the background, discouraging harmful conduct without anyone ever suing. The same is true for using consumer protection law to punish deception and truly harmful practices (e.g., misleading billing or overstating speeds).

A range of regulatory reforms would also go a long way toward promoting competition. Most importantly, reform local franchising so competitors like Google Fiber can build their own networks. That means giving them “open access” not to existing networks but to the public rights of way under streets. Instead of requiring that franchisees build out to an entire franchise area—which often makes both new entry and service upgrades unprofitable—remove build-out requirements and craft smart subsidies to encourage competition to deliver high-quality universal service, and to deliver superfast broadband to the customers who want it. Rather than controlling prices, offer broadband vouchers to those that can’t afford it. Encourage telcos to build wireline competitors to cable by transitioning their existing telephone networks to all-IP networks, as we’ve urged the FCC to do (here and here). Let wireless reach its potential by opening up spectrum and discouraging municipalities from blocking tower construction. Clear the deadwood of rules that protect incumbents in the video marketplace—a reform with broad bipartisan appeal.

In short, there’s a lot of ground between “do nothing” and “regulate broadband like electricity—or railroads.” Crawford’s arguments simply don’t justify imposing 19th century common carriage regulation on the Internet. But that doesn’t leave us powerless to correct practices that truly harm consumers, should they actually arise.

Read the whole thing here.

by Larry Downes and Geoffrey A. Manne

Now that the election is over, the Federal Communications Commission is returning to the important but painfully slow business of updating its spectrum management policies for the 21st century. That includes a process the agency started in September to formalize its dangerously unstructured role in reviewing mergers and other large transactions in the communications industry.

This followed growing concern about “mission creep” at the FCC, which, in deals such as those between Comcast and NBCUniversal, AT&T and T-Mobile USA, and Verizon Wireless and SpectrumCo, has repeatedly been caught with its thumb on the scales of what is supposed to be a balance between private markets and what the Communications Act refers to as the “public interest.”

At today’s Open Commission Meeting, the FCC is set to consider two apparently forthcoming Notices of Proposed Rulemaking that will shape the mobile broadband sector for years to come.  It’s not hyperbole to say that the FCC’s approach to the two issues at hand — the design of spectrum auctions and the definition of the FCC’s spectrum screen — can make or break wireless broadband in this country.  The FCC stands at a crossroads with respect to its role in this future, and it’s not clear that it will choose wisely.

Chairman Genachowski has recently jumped on the “psychology of abundance” bandwagon, suggesting that the firms that provide broadband service must (be forced by the FCC to) act as if spectrum and bandwidth were abundant (they aren’t), and must not engage in activities that are sensible responses to broadband scarcity.  According to Genachowski, “Anything that depresses broadband usage is something that we need to be really concerned about. . . . We should all be concerned with anything that is incompatible with the psychology of abundance.”  This is the idea — popularized by non-economists and ideologues like Susan Crawford — that we should require networks to act as if we have “abundant” capacity, and enact regulations and restraints that prevent network operators from responding to actual scarcity with business structures, rational pricing or usage rules that could in any way deviate from this imaginary Nirvana.

This is rhetorical bunk.  The culprit here, if there is one, isn’t the firms that plow billions into expanding scarce capacity to meet abundant demand and struggle to manage their networks to maximize capacity within these constraints (dubbed “investment heroes” by the more reasonable lefties at the Progressive Policy Institute).  Firms act like there is scarcity because there is — and the FCC is largely to blame.  What we should be concerned about is not the psychology of abundance, but rather the sources of actual scarcity.

The FCC faces a stark choice—starting with tomorrow’s meeting.  The Commission can choose to continue to be the agency that micromanages scarcity as an activist intervenor in the market — screening out some market participants as “too big,” and scrutinizing every scarcity-induced merger, deal, spectrum transfer, usage cap, pricing decision and content restriction for how much it deviates from a fanciful ideal.  Or it can position itself as the creator of true abundance and simply open the spectrum spigot that it has negligently blocked for years, delivering more bandwidth into the hands of everyone who wants it.

If the FCC chooses the latter course — if it designs effective auctions that attract sellers, permitting participation by all willing buyers — everyone benefits.  Firms won’t act like there is scarcity if there is no scarcity.  Investment in networks and the technology that maximizes their capacity will continue as long as those investments are secure and firms are allowed to realize a return — not lambasted every time they try to do so.

If, instead, the Commission remains in thrall to self-proclaimed consumer advocates (in truth, regulatory activists) who believe against all evidence that they can and should design industry’s structure (“big is bad!”) and second-guess every business decision (“psychology of abundance!”), everyone loses (except the activists, I suppose).  Firms won’t stop acting like there’s scarcity until there is no scarcity.  And investment will take a backseat to unpopular network management decisions that represent the only sensible responses to uncertain, over-regulated market conditions.