
Last week, the FTC hired outside litigator Beth Wilkinson to lead an investigation into Google’s conduct, which some in the press have interpreted as a grave sign for the search company. The FTC is reportedly interested in pursuing Google under Section 5 of the FTC Act, which prohibits a firm from engaging in “unfair methods of competition.” Along with Bob Litan, who served as Deputy Assistant Attorney General in the Antitrust Division during the Microsoft investigation, I have penned a short paper on the FTC’s seemingly unorthodox Section 5 case against Google. (Disclosure: This paper was commissioned by Google.)

Litan and I explore a few possible theories of harm under a hypothetical Section 5 case and find them wanting, including (1) claims that specialized search results (such as flight, shopping, or map results) “unfairly” harm independent specialized-search websites like Kayak (travel) or MapQuest (mapping and directions), and (2) assertions that Google “deceived” users or websites by seemingly reneging on pledges not to favor its own sites. For the sake of brevity, I focus on the FTC’s potential deception theory here and leave it to interested readers to pursue the “unfairness” theory in the paper.

Deception of Users

The alleged bases of Google’s deception are generic statements that Google made, either in its initial public offering (IPO) or on its website, about its attitude toward users leaving the site. The provision of a lawful service, specialized search, launched several years after the IPO statement certainly cannot be deceptive. To conclude that it is, and more importantly, to prevent the company from offering innovations in search, would establish a precedent that would surely punish innovation throughout the rest of the economy.

As for the mission statement that the company wants users to get off the site as quickly as possible, it is just that, a mission statement. Users do not go to the mission statement when they search; they go to the Google site itself. Users cannot possibly be harmed even if this particular statement in the company’s mission were untrue. Moreover, if the problem lies in that statement, then any remedy should be directed at amending that statement. There is no justification for the Commission to hamper Google’s specialized search services themselves or to dictate where Google must display them.

Deception of Rivals

An alternative theory suggests that Google deceived its rivals, reducing innovation among independent websites. In a February 2012 paper delivered to the OECD, Tim Wu explained that competition law can be used to “increase the costs of exclusion,” which, if successful, would promote innovation among application providers. Wu argued that “oversight of platforms is conceptually similar” to oversight of standard-setting organizations (SSOs). He offered a hypothetical case in which a platform owner “broadly represents to the world that he maintains an open and transparent innovation platform,” gains a monopoly position based on those representations, and then begins to exclude applications “that might themselves serve as platforms.” Once the industry has committed to a private platform, Wu argued, the platform owner “earns oversight of its practices from that point onward.”

So has Google earned itself oversight due to its alleged deception? Google is not perceived by web designers as providing a platform on which all companies stand on equal footing. Websites’ rankings in Google’s search results vary tremendously over time; no publisher could reasonably rely on any particular ranking on Google. Instead, websites want their presence to be known to any and all search engines. That specialized-search sites did not base their business plans on Google’s commitment to openness is what distinguishes Google’s platform from Microsoft’s platform in the 1990s. To Wu’s credit, he does not mention Google in this section of the paper; the only platforms mentioned are those of Apple, Android, and Microsoft.

It is even more of a stretch to analogize Google’s conduct to that in the FTC’s Rambus case. Unlike websites, which do not depend on a Google “standard” (a website can be accessed by users from any search engine, or through direct navigation), computer memory chips must be compatible with a variety of computers, which requires that chip producers develop a common set of standards for performance and interoperability. According to the FTC, Rambus exploited this reliance by, among other things, not disclosing to chip makers that it had additional divisional patent applications in process. That specialized-search sites did not make “irreversible technological” investments based on a Google commitment to a common standard is what distinguishes Google’s platform from SSOs.

The Freedom to Innovate

A change in a business model cannot be a legitimate basis for a Section 5 case, because a firm cannot be expected, at its inception, to know how the world is going to unfold. A lot can change in a decade. Consumers’ taste for the product can change. Technology can change. Business models must adapt to such change, or else they die. There should be no requirement that once a firm writes a mission statement, it be held to that statement forever. What if Google failed to anticipate the role of specialized search in 2004? Presumably, Google failed to anticipate a lot of things, but that should not be the basis for denying its entry into ancillary services or expanding its core offerings. As John Maynard Keynes famously replied to a criticism during the Great Depression of having changed his position on monetary policy: “When the facts change, I change my mind. What do you do, sir?” If Google exposes itself to increased oversight merely for changing its mind, then other technology firms might think twice before innovating. And that would be a horrible consequence of the FTC’s exploration of alternative antitrust theories.

Last month, the Federal Reserve released a study, titled “The U.S. Housing Market: Current Conditions and Policy Considerations,” which offers prescriptions on how to cure the housing mess. Given the importance of this issue to the nation’s economic wellbeing—a large portion of our assets are tied up in real estate, and the associated housing-wealth effects are large—I am surprised how little attention the housing market is getting in the Republican debates. Debate sponsors, presumably driven by ratings, seem more interested in Newt’s love life and Mitt’s finances than in economic policy.

The concluding comments of the Fed study are worth repeating here:

The significant tightening in household access to mortgage credit likely reflects not only a correction of the unsound underwriting practices that emerged over the past decade, but also a more substantial shift in lenders’ and the GSEs’ willingness to bear risk. Indeed, if the currently prevailing standards had been in place during the past few decades, a larger portion of the nation’s housing stock probably would have been designed and built for rental, rather than owner occupancy. Thus, the challenge for policymakers is to find ways to help reconcile the existing size and mix of the housing stock and the current environment for housing finance. Fundamentally, such measures involve adapting the existing housing stock to the prevailing tight mortgage lending conditions–for example, devising policies that could help facilitate the conversion of foreclosed properties to rental properties—or supporting a housing finance regime that is less restrictive than today’s, while steering clear of the lax standards that emerged during the last decade. Absent any policies to help bridge this gap, the adjustment process will take longer and incur more deadweight losses, pushing house prices lower and thereby prolonging the downward pressure on the wealth of current homeowners and the resultant drag on the economy at large.

Translation: If we can expedite the transition of our housing stock, we can turn this economy around faster. The study offers several policy prescriptions, including facilitating the conversion of foreclosed properties to rental properties, minimizing unnecessary foreclosures through the use of a broad menu of types of loan modifications, and supporting policies that facilitate deeds-in-lieu of foreclosure or short sales.

On page 14 (of a 26-page report), the study offers yet another approach: land banks, which are described as “public or nonprofit entities created to manage properties that are not dealt with adequately through the private market.” Before the free-market crowd gets worked up, it should recognize that a string of abandoned homes generates a negative externality in a neighborhood, which is precisely the occasion for intervention. Properties acquired by land banks may be rehabilitated as rental units or demolished, as market conditions dictate, which could counteract the deflationary forces caused by excess supply and neighborhood blight.

My only nit with the section is that the Fed limits the land-bank option to “low-value properties,” which it seems to define as properties worth less than $20,000. This is too timid: If land banks are successful at revitalizing neighborhoods—imagine a park in every neighborhood—then why limit the policy to homes that are effectively worthless? Despite this limitation, the Fed calls for increased funding and technical assistance to existing land banks and for creating a national land bank program.

Kudos to the Fed for taking such a bold stand! If only we could get the debate moderators to ask candidates how to solve the housing mess.

Economists recognize that the source of sustainable, private-sector jobs is investment. Due to measurement problems with investment data, however, it is sometimes easier to link a byproduct of investment—namely, adoption of the technology made possible by the investment—to job creation. This is precisely what economists Rob Shapiro and Kevin Hassett have done in their new study on the employment effects of wireless investments.

Shapiro and Hassett credit the nation’s upgrade of wireless broadband infrastructure from second-generation (2G) to third-generation (3G) technology with generating over one million jobs between 2006 and 2011. To demonstrate that adoption of 3G handsets “caused” job creation in an econometric sense, the authors studied the relationship between the change in a state’s employment and the cumulative penetration of cell phone technologies. According to their econometric model, every 10 percentage point increase in the penetration of a new generation of cell phones in a given quarter causes between a 0.05 and 0.07 percentage point increase in employment growth in the following three quarters.

How reasonable are these results? In 2010, Bob Crandall and I estimated that investment in second-generation broadband infrastructure of roughly $30 billion per year, including wireless infrastructure, sustained roughly 500,000 jobs between 2006 and 2009. We further estimated that spillover effects in other industries that exploit broadband technology could sustain another 500,000, bringing the total job effect close to one million jobs per year. Although Shapiro and Hassett’s estimates (based on wireless deployment only) significantly exceed ours (based on all broadband deployment), their estimate is not outside the realm of possibility.

Crandall, Lehr, and Litan (2007) also conducted a regression analysis using state-level broadband penetration data from 2003 to 2005 to estimate job effects. They projected that for every one-percentage-point increase in broadband penetration in a state, employment increases by 0.2 to 0.3 percent per year. On a national level, their results imply an increase of approximately 300,000 jobs per year per one-percentage-point increase in broadband penetration. Once again, Shapiro and Hassett’s estimates are consistent with this prior work.
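To see how these coefficients translate into headline job counts, here is a minimal back-of-envelope sketch in Python. The figure of roughly 130 million for the U.S. employment base is an assumption for illustration only; the coefficient ranges are those reported in the studies discussed above.

```python
# Back-of-envelope translation of the cited coefficients into job counts.
# ASSUMPTION: a U.S. employment base of ~130 million jobs, used purely for
# illustration; only the coefficient ranges come from the cited studies.

EMPLOYMENT_BASE = 130_000_000  # assumed U.S. employment (jobs)

# Shapiro-Hassett: a 10-percentage-point gain in next-generation handset
# penetration raises employment growth by 0.05 to 0.07 percentage points.
for pp_boost in (0.05, 0.07):
    jobs = EMPLOYMENT_BASE * (pp_boost / 100.0)
    print(f"Shapiro-Hassett ({pp_boost} pp growth boost): ~{jobs:,.0f} jobs")

# Crandall-Lehr-Litan: each 1-percentage-point gain in broadband penetration
# is associated with 0.2 to 0.3 percent higher employment per year.
for pct_gain in (0.2, 0.3):
    jobs = EMPLOYMENT_BASE * (pct_gain / 100.0)
    print(f"Crandall-Lehr-Litan ({pct_gain}% employment gain): ~{jobs:,.0f} jobs")

# The second range (260,000 to 390,000 jobs) brackets the ~300,000
# jobs-per-year figure cited in the text.
```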

Scholars may differ on the precise way to measure the employment effects, but that debate misses the more important policy point—namely, that broadband technologies generally and wireless broadband in particular have become a vital engine of job creation. The observed correlation between wireless adoption and employment is not accidental: To induce customers to adopt the coolest handset, firms must continuously invest in the next generation of network and device technologies. And these costly investments sustain jobs.

Moreover, contrary to the FCC’s opinion in its 15th annual wireless competition report, private industry’s sustained and widespread investment in new wireless broadband technologies is consistent with the sector being intensely competitive. Industry critics have dismissed such evidence, arguing instead that the industry is in the death grip of monopolists. Although a monopolist may have an incentive to innovate to protect against a future threat, firms in a competitive industry have incentives to invest and innovate as a way to protect against losing market share today.

Policymakers should ask themselves this question: Why would wireless carriers continually invest billions of dollars on next-generation technologies if they could sit back and exploit their alleged monopoly rents? Experience and common sense tell us that in fact, companies in this space are not behaving like monopolists. Rather, wireless providers of all stripes are desperately trying to distinguish themselves from their rivals. Wireless tablets and phones are driving demand for more and faster wireless broadband, while spectrum-devouring apps like Siri have captured the imagination of millions. The wireless arms race is on, and the U.S. economy stands to benefit directly as wireless companies try to outmaneuver one another with the fastest networks, coolest devices, and deepest array of killer apps.

Regulated firms and their Washington lawyers study agency reports and public statements carefully to figure out the rules of the road; the clearer the rules, the easier it is for regulated firms to understand how the rules affect their businesses and to plan accordingly. So long as the regulator and the regulated firm are on the same page, resources will be put to the most valuable use allowed under the regulations.

When a regulator’s signals get blurry, resources may be squandered. For starters, take the FCC’s annual wireless competition report and the Commission’s pronouncements on spectrum policy. For several years, the competition report cited a trend of falling prices and increasing entry as evidence of robust competition while at the same time noting that industry concentration was slowly rising.

In an abrupt turnaround, the FCC’s 2010 competition report cited the slow but steady increase in concentration as evidence of a lack of competition despite the continued decline in prices and increase in new-firm entry. In other words, in the face of the same industry trends, the agency’s conclusion on competition reversed. The increased weight placed on concentration also seemed at odds with the DOJ’s revised Merger Guidelines, which deemphasized concentration in favor of direct evidence of market power.

At last week’s Consumer Electronics Show, the FCC chairman suggested that the competition report’s objective was not to provide guidance on Commission policy but instead “to lay out data around the degrees of competition in the different sectors.” So much for clearing up the ambiguity. Industry participants expect more than a Wikipedia entry on something so weighty as an annual report to Congress regarding one of the economy’s most critical sectors.

The agency’s signals on spectrum policy are even murkier. On the one hand, during the last few years, the current FCC has been calling for more frequencies to be made available to support and grow wireless broadband networks. The FCC has also been publicly supporting voluntary incentive auctions—a market-based tool to compensate existing spectrum licensees for returning their licenses—as the best way to reallocate unused broadcast spectrum to wireless broadband. However, in a confusing set of remarks at the same trade show, the FCC now seems to be saying that it wants to see more spectrum made available only if the agency can dictate who gets the spectrum and how they can use it. The very discretion that the FCC now seeks will invite rent-seeking behavior among auction contestants, who will lobby the agency to slant the rules in a way that limits competition and advances their narrow interests; better to immunize the FCC from this lobbying barrage by limiting its discretion.

The agency’s inconsistent and confusing analysis and statements in these two critical policy arenas—wireless competition and spectrum policy—created the perfect storm last year when AT&T sought to acquire T-Mobile. AT&T argued that it wanted to purchase T-Mobile and use its spectrum to augment existing spectrum and infrastructure resources, consistent with the agency’s acknowledgement that wireless carriers needed more spectrum to support surging demand for bandwidth-intensive wireless services such as streaming video. Had AT&T understood the FCC’s intentions, it would not have offered a four-billion-dollar breakup fee to T-Mobile’s parent; these resources could have been put to better use.

The singular objective that should drive the Commission in all matters wireless is getting spectrum into the hands of firms that value it the most. The last 20 years of wireless-industry growth has proven that those who value spectrum the most put it to use most quickly. To commit to this course of action, the agency needs to more clearly and consistently signal its regulatory intentions. If the agency wants to spur competition, it should support Congressional efforts to authorize incentive auctions without restrictions. It also needs to let the evidence of lower prices, growing adoption, and increasing innovation inform its understanding of the state of competition.

For years the public has been clamoring for a playoff system to crown a champion in college football. Yet the geniuses at the BCS stubbornly defended—at least until now—their computer-knows-best system for inviting the two most worthy teams. By injecting doubt over the legitimacy of its invitees, the current system diminishes the meaning of the BCS title game, as evidenced by the abysmal Nielsen ratings for Monday night’s Alabama-LSU game (only 13.8 percent of U.S. television households tuned in to watch the television equivalent of paint drying) and last year’s Auburn-Oregon title game (15.3 percent). By comparison, the title game between Alabama and Texas just two years ago drew 17.2 percent of U.S. households; if this were a publicly traded firm, its shares would be falling fast.

Even worse, the current system diminishes the importance of the other BCS games. Besides alumni, who wants to watch an exhibition game between Oregon and Wisconsin (this year’s Rose Bowl) if the winner cannot advance to the next round? This year’s Rose Bowl drew a meager 9.9 percent of U.S. television households, down 15 percent from last year’s Rose Bowl between TCU and Wisconsin. And last year’s Rose Bowl drew 11.3 percent, down 15 percent from the prior year. Can anyone spot a pattern?

In contrast, the first round of the NFL playoffs this year drew massive audiences. For example, NBC’s coverage of the Saints-Lions game earned a 19.3 overnight rating, the third-best overnight for a Wild Card Saturday game since the 1999 playoff season. Along with 42.4 million of my closest friends, I found myself compelled to watch the Broncos-Steelers Wild Card game (25.9 rating), not because I care about either team, but because the investment of my time would pay off in even greater happiness next week.

It is a tragedy that the BCS would run these valuable assets into the ground. Imagine the excitement of a Cinderella team like Baylor, Boise State, or TCU sneaking into the championship. Organized as a playoff, the Rose Bowl (or any BCS non-title game) would experience a significant lift in ratings, along the lines of the lift enjoyed by NFL post-season games relative to NFL regular-season games. To be fair, the profit function of the BCS conferences is presumably much more complicated than “maximize the value of the television revenues for the BCS games.” But these television revenues must be a critical component of their joint profits. Which raises the question: Why would the BCS systematically err when so much money is at stake?

In yesterday’s Washington Post, Health and Human Services Secretary Kathleen Sebelius makes an impassioned plea for skeptics to reconsider the Affordable Care Act. Secretary Sebelius argues that the Act will bring down health care costs by, among other things, assisting those who cannot afford health insurance coverage. Although expanding health insurance coverage is a worthy goal, bringing more folks into the health care system could result in higher prices for health care services. The housing market provides a nice example: although subsidized mortgage rates allowed more people to own homes, more buyers eventually meant higher home prices.

Secretary Sebelius reminds us of the raft of new regulations designed to constrain the worst impulses of insurance providers, including requiring providers to justify premium increases above 10 percent in an online forum; to spend at least 80 percent of premium dollars on health care as opposed to salaries or advertising; to accept applicants with preexisting conditions; and to charge zero copays for so-called preventative services. This level of micro-management seems excessive, even by regulated-industry standards.

Given the raging debate over the constitutionality of the Act’s requirement that everyone buy health insurance, the other provisions of the Act have received relatively little attention. To an economist who believes in the efficacy of prices in allocating scarce resources, the zero-copay rule is perhaps the most offensive provision of the Act. Even for preventative services, a positive copay ensures that users do not abuse their privileges. Doubters who live or work in major cities need only look out the window during rush hour to see what happens when an activity (using a road) is priced at zero. It is not clear that the increase in demand for preventative services will be offset by the promised decrease in demand for treatment of chronic ailments. Moreover, providers are likely to react to a zero-copay rule by raising deductibles; these terms are highly interrelated. Finally, there is no limit to what constitutes preventative medicine; some men do get breast cancer, but not enough to justify free mammograms for all men.

This is not the first time the Administration has imposed a zero-price rule. The chairman of the Federal Communications Commission, who was carefully screened by President Obama on the issue of net neutrality, adopted the Open Internet Order, which banned an Internet service provider (such as AT&T) from charging a price to an Internet content provider (like Sony) in exchange for speedier delivery. Under the Commission’s rationale, if some websites could not afford the surcharge for higher quality of service, then no one should.

It seems that prices for “critical” services such as preventative medicine and Internet access are evil because they exclude certain segments of the economy. To be fair, under certain conditions, such as information asymmetries, externalities, and adverse selection (common in health insurance markets), market-based prices may result in too little or too much consumption relative to the socially optimal level. But the attacks on the price mechanism by these two pieces of regulation do not seem to be grounded in those traditional market-failure arguments. Without a limiting principle, one could oppose prices for just about any good or service, as there will always be someone who cannot afford it. Better to leave prices in place (and subsidize those who cannot afford the “critical” service) than to ban pricing altogether. In contrast to a zero-price rule, the cost of the subsidy is transparent to taxpayers.

AT&T/T-Mobile RIP

Hal Singer — 20 December 2011

Yesterday, AT&T announced it was halting its plan to acquire T-Mobile. Presumably AT&T did not think it could prevail in defending the merger in two places simultaneously—one before a federal district court judge (to defend against the DOJ’s case) and another before an administrative law judge (to defend against the FCC’s case). Staff at both agencies appeared implacable in their opposition. AT&T’s option of defending the cases sequentially, first against the DOJ and then against the FCC, was removed by the DOJ’s threat to withdraw its complaint unless AT&T re-submitted its merger application to the FCC. The FCC rarely makes a major license-transfer decision without the green light from the DOJ on antitrust issues. Instead, the FCC typically piles on conditions to transfer value created by the merger to complaining parties after the DOJ has approved a merger. Prevailing first against the DOJ would have rendered the FCC’s opposition moot.

The FCC’s case against the merger was weak. I have already blogged about the FCC’s Staff Report, but one point is worth revisiting as we digest the fate of T-Mobile’s spectrum: The FCC placed a huge bet on the cable companies’ breathing life into a floundering firm. In particular, the Staff Report cited a prospective wholesale arrangement between Cablevision and T-Mobile as evidence that some alternative suitor—whose name did not rhyme with “Amy and tea” or “her eyes on”—could preserve the number of actual competitors in the marketplace. However, within days of the FCC’s placing its bet on the cable industry, Verizon announced its intention to gobble up the spectrum of Comcast, Time Warner, and Bright House. Over the weekend, Verizon announced its purchase of spectrum from Cox. To be fair, Verizon’s acquisitions do not preclude T-Mobile and Cablevision from entering into some spectrum-sharing arrangement, but let’s not hold our breath.

This episode highlights the danger of regulators’ industrial engineering: The wireless marketplace is so dynamic that a seemingly reasonable bet by an agency was revealed to be a stunning loser in just a matter of days. By virtue of AT&T’s “winning the auction” for T-Mobile’s assets—Deutsche Telekom, T-Mobile’s parent, is leaving the American wireless industry one way or another—the marketplace selected the most efficient suitor for T-Mobile. If the cable companies or some other suitor were interested in entering the wireless industry, then presumably they would have stepped forward when T-Mobile was still on the open market.

Can you blame the cable companies for their lack of interest in wireless? Who wants to enter an industry with declining prices that requires billions in network investment that cannot be re-deployed elsewhere in the event of a loss? When asked what Deutsche Telekom plans to do with its U.S. assets now that the AT&T deal has unraveled, a company spokesman said: “There’s no Plan B. We’re back at the starting point.” Such gloom is hard to reconcile with the FCC’s belief that a viable suitor is lurking in the background.

Short of Google’s or DISH Network’s or some non-communications giant’s swooping down in the coming days, the net costs of the FCC’s risky intervention will begin to mount. The ostensible benefits of intervention were to prevent a price increase and to preserve the cable companies’ play on T-Mobile’s spectrum. The second benefit has evaporated, and the first benefit was never proven in the FCC’s Staff Report. On the cost side of the ledger, AT&T’s customers will soon experience increased congestion as their demand for wireless video and other bandwidth-intensive applications outstrips the capacity of AT&T’s network. And T-Mobile’s customers will never get to experience 4G in all its glory. (Deutsche Telekom has little incentive to upgrade a network it plans to sell.) The FCC has certainly frozen AT&T’s spectrum holdings in place, but has the agency advanced the public interest?

Geoff Manne’s blog post on the FCC’s Staff Analysis and Findings (“Staff Report”) has inspired me to come up with a top ten list. The Staff Report relies heavily on concentration indices to make inferences about a carrier’s pricing power, even though direct evidence of pricing power is available (and points in the opposite direction). In this post, I have chosen ten lines from the Staff Report that reveal the weakness of the economic analysis and suggest a potential regulatory agenda. It is clear that the staff wants T-Mobile’s spectrum to land in the hands of a suitor other than AT&T—the government apparently can allocate scarce resources better than the market—and that the report’s authors define the public interest as locking AT&T’s spectrum holdings in place.

Top Ten Lines in the Staff Report

  1. “While there is more to establishing likely competitive harms than measuring market and spectrum concentration, these [concentration] metrics shed light on the scope and scale of the competition that would be eliminated by the proposed transaction.” Staff Report, para. 17. An important admission. The staff is signaling that the merger analysis cannot begin and end with a concentration analysis. The Staff Report fails to explain, however, what more is needed to establish anticompetitive effects. The answer is direct evidence that the merging firms significantly constrain each other’s ability to raise prices. And the Staff Report fails on this score.
  2. “Second, the proposed transaction would result in the elimination of a nationwide rival that has played the role of a disruptive competitive force in the marketplace.” Staff Report, para. 17. Setting aside the weakness of the claim that T-Mobile—the only major carrier to lose subscribers in 2010—is a disruptive force, the Staff Report fails to explain how T-Mobile’s supposed disruption has anything to do with the instant merger. Is the staff saying that T-Mobile is so disruptive and so irreplaceable that any merger eliminating T-Mobile would be anticompetitive? The Staff Report’s “disruptive” evidence, chronicled from paragraphs 21 through 28, could be regurgitated in a Sprint/T-Mobile merger review or in a Leap/T-Mobile review. Would those mergers be presumptively anticompetitive as well? Critically, the evidence of T-Mobile being a disruptive force does not speak to the issue of whether T-Mobile constrains the price of AT&T.
  3. “Market concentration statistics of the type generated by this transaction commonly indicate that buyers would have fewer viable choices, making both unilateral and coordinated competitive effects more likely.” Staff Report, para. 17. Whether concentration statistics indicate anticompetitive effects in general or in a hypothetical U.S. market is beside the point. What matters here is whether concentration statistics are a good predictor of higher wireless prices. And the answer is a resounding no. As the former chief economist of the FCC has noted in a forthcoming paper, wireless concentration is negatively correlated with wireless prices. At a minimum, the FCC should note this finding—the abstract has been viewed nearly 1,500 times and the full paper has been downloaded 250 times from SSRN—and provide the proper caveats to any concentration analysis it conducts regarding the wireless industry.
  4. “Although T-Mobile faces challenges as the industry develops and responds to the increasing data demands of consumers, the record does not support the bleak short-term outlook for T-Mobile that AT&T has portrayed in its submissions.” Staff Report, para. 22. To say that T-Mobile faces challenges is an understatement: T-Mobile is uniquely losing subscribers and its German owners want out of the U.S. market. How can the agency better predict the short-term outlook for T-Mobile? Is there a crystal ball in the FCC’s basement? If the short-term outlook were as rosy as the agency suggests, then why would T-Mobile’s owners—who presumably have the best vantage point on the firm’s future performance—seek a buyer right before a turnaround in performance?
  5. “These initiatives [announced by T-Mobile’s CEO before the transaction] might have strengthened T-Mobile’s disruptive role in the industry, for example by highlighting its unlimited data plans, and using them to define its brand and differentiate it from rival brands that have adopted tiered pricing.” Staff Report, para. 23. How can T-Mobile go from a disruptive force to an even stronger disruptive force? You can’t be half-pregnant, and you can’t be half-disruptive. It seems that the Staff Report is now saying that T-Mobile would have been disruptive but for the transaction, which caused T-Mobile to abandon these really stupendous plans. According to footnote 61, these initiatives were announced in a T-Mobile press release on January 20, 2011. But the agency doesn’t bother to see how the market reacted to these initiatives. It is curious that the agency would stake its disruptive claims on something so speculative.
  6. “T-Mobile has also repeatedly acted as a pricing innovator over the past few years, introducing offers such as . . .  T-Mobile introduced a simple online tool that allows a subscriber to manage all services on a multi-line family plan, for example, setting and changing the limits for minutes, messages and downloads (e.g., games, ring tones) on a child’s line.” Staff Report, para. 24. According to the Staff Report, this “innovation” is among the seven most disruptive offerings from T-Mobile since 2007. Seriously? Is this impressive to anyone out there? Even assuming T-Mobile was the first to allow wireless users to adjust their settings online, how in the world did that constrain AT&T’s ability to set prices? The other innovations cited in the Staff Report are equally unimpressive. How well did that Unlimited Hotspot Calling or T-Mobile Hotspot @Home work out? If none of the major carriers embrace an offering like those, can we safely infer that they weren’t so innovative? If you want to make free Wi-Fi calls on your phone, download Viber. Yawn.
  7. “[O]ur analysis of the record reflects that T-Mobile charges lower prices than the other nationwide firms.” Staff Report, para. 25. Apparently, the staff doesn’t want you to know that T-Mobile had its legs cut out from under it by regional carriers such as Leap and MetroPCS. Indeed, T-Mobile’s executives have admitted as much publicly, explaining how it was caught between the high-end service of AT&T and Verizon and the low-end, no-frills service of Leap and MetroPCS. And no firm wants to be caught in the middle of the road. Speaking of being caught, the staff should not offer such misleading statistics. To make things concrete, in Washington D.C., T-Mobile offers a $39.99 per month plan that includes 500 minutes of voice and no text messages. In comparison, Leap offers a $35 per month plan that has unlimited voice minutes and includes text messages. But the Staff Report wouldn’t count Leap’s offering because Leap is not a “national” carrier, despite Leap’s offering wireless service in 35 states covering 100 million people.
  8. “T-Mobile expressed interest [in selling wholesale access to Cablevision], had previously exhibited a willingness to sell wholesale mobile wireless capacity, and, in Cablevision’s view, was likely to continue to have excess capacity it could use to serve Cablevision’s customers in the future. Although the outcome of any negotiation is uncertain, a deal between Cablevision and T-Mobile appeared to be beneficial to both parties.” Staff Report, para. 28. The staff here wants us to believe that in addition to the proposed merger undercutting T-Mobile’s initiatives to revamp the firm, the proposed transaction would undercut a prospective deal with Cablevision that would ostensibly bring benefits to Cablevision’s customers in parts of New York, New Jersey, and Connecticut. Well, now this all makes sense: Stop a merger that could generate benefits to AT&T’s and T-Mobile’s nationwide customers to preserve Cablevision’s option to offer a quad-play to its customers in three states. Kudos to the cable lobbyists for getting their client’s concerns front and center in the FCC’s merger analysis. Setting aside the uncertainty surrounding the actual wholesale discussion that Cablevision and T-Mobile may or may not have entertained, the Staff Report suggests incorrectly that Cablevision depends uniquely on T-Mobile for spectrum, and that Cablevision’s customers would benefit significantly from having a sixth or seventh wireless option. As further evidence of how out of touch the Staff Report is with market realities, Verizon just announced that it was purchasing all of the AWS spectrum held by several cable companies, a market reality inconsistent with the staff’s view that T-Mobile’s innovative future lay in partnerships with cable companies.
  9. “Combined, these five regional providers accounted for approximately six percent of the industry’s total subscribers and revenues at the end of 2010. None of these providers’ networks cover more than 34 percent of the U.S. population, and for most their more advanced broadband networks are smaller.” Staff Report, para. 38. Because these regional providers do not have the potential to serve 100 percent of the U.S. population, it makes no sense to denominate their size in terms of nationwide subscribers. Doing so necessarily understates their importance in the local markets they serve. By way of analogy, Comcast’s in-region share of video subscribers or “video penetration” is roughly 44 percent, whereas its share of nationwide video subscribers is roughly 25 percent. Of course, the latter statistic bears no relation to Comcast’s pricing power. Moreover, while neither Leap nor MetroPCS alone covers a majority of the nation, their roaming agreement (and complementary footprints) allows each firm to provide nationwide coverage. Again, the Staff Report appears to be playing fast and loose with the data.
  10. “AT&T’s unilateral incentive to raise price in this case arises because providers sell differentiated products, and many of AT&T’s customers view T-Mobile as their second choice at current prices. . . . Local number porting data (data on where customers go when they switch wireless providers while keeping their phone number) indicate that each of them [the major carriers] has customers who view T-Mobile products as their second choice.” Staff Report, para. 50. What does “many” mean in this context? And what does it mean to have at least some customers who switched to T-Mobile? Could “many” AT&T customers mean five percent? Any share less than T-Mobile’s probability-adjusted market share of roughly 16 percent (equal to T-Mobile’s share divided by 100 less AT&T’s share) would not be evidence of significant cross-price elasticity between AT&T and T-Mobile; a quick sketch of this benchmark follows the list. The Staff Report later defines “many” as “a non-trivial fraction of AT&T’s customers.” But why is the standard so low? Later the Staff Report claims that “a substantial fraction” of AT&T customers switched to T-Mobile, and did so “in response to changes in the relative price of T-Mobile products and the introduction of new T-Mobile products.” Setting aside its shifting standard of economic significance (from “many” to “non-trivial” to “substantial”), porting data cannot tell you why a customer switched from one carrier to another. To assess cross-price elasticity, one must estimate an econometric model using customer-level wireless bills, which the Staff Report does not do. Finally, to bolster its evidence of cross-price elasticity, the Staff Report cites a T-Mobile “Losing Your Shirt” advertising campaign targeting AT&T’s customers. That T-Mobile aspired to attract AT&T customers does not constitute evidence that T-Mobile actually disciplines AT&T’s prices. Many computer companies aspire to topple Apple, but that doesn’t make it so.
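To make the 16 percent benchmark in item 10 concrete, here is a minimal sketch. The share figures below are illustrative assumptions chosen only to reproduce the rough figure cited above; they are not drawn from the Staff Report.

```python
# A minimal sketch of the "probability-adjusted share" benchmark in item 10.
# ASSUMPTION: the share figures are illustrative, chosen to reproduce the
# roughly-16-percent figure in the text; they are not from the Staff Report.

ATT_SHARE = 27.0      # assumed AT&T share of subscribers (percent)
TMOBILE_SHARE = 12.0  # assumed T-Mobile share of subscribers (percent)

def benchmark_diversion(target_share: float, origin_share: float) -> float:
    """Share of the origin carrier's defectors expected to land at the target
    carrier if switchers spread in proportion to the remaining rivals' shares."""
    return 100.0 * target_share / (100.0 - origin_share)

benchmark = benchmark_diversion(TMOBILE_SHARE, ATT_SHARE)
print(f"Benchmark diversion from AT&T to T-Mobile: {benchmark:.1f}%")  # ~16.4%

# Reading: if porting data showed fewer than roughly 16 percent of departing
# AT&T customers choosing T-Mobile, the data would not suggest that T-Mobile
# is a disproportionate second choice for AT&T customers.
```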

As you digest these criticisms, think of how an economic expert could defend these statements upon cross-examination. Although the authors of the report will never be subjected to such an exam, it is a bit surprising that such bald and unsupported statements made it past the cutting-room floor.

I just returned from a long weekend in the Caribbean, attempting to recreate the scenery of (and a few scenes from) The Bachelor. Given the ubiquity of Wi-Fi coverage, I was able to stay connected with my favorite newspapers and magazines: iPhone in one hand, Mojito in the other. Just as I was feeling like a one-percenter, I stumbled upon a story about Newt Gingrich’s propensity to use private air travel. According to the Post, “for at least two years [Gingrich] insisted upon flying private charter jets everywhere he traveled, with most of the costs—ranging from $30,000 to $45,000 per trip—billed to [his company] American Solutions.”

The hefty price tag for private air travel is in line with a quote I recently obtained from a representative of a major jet leasing operation, which allows customers to purchase 25 hours of private air travel at $166,000 on a small Cessna jet that comfortably seats five. (Use it in 18 months or lose it.) This luxury amounts to $6,640 per hour of flight time, or $33,200 for a five-hour round trip, smack in the middle of what Gingrich paid. What level of wealth—or what strange preferences—would induce someone like Gingrich to pay that much for such a privilege? (Perhaps the same preferences that would induce you to run up a $500,000 tab at Tiffany.)

This question can be answered with the economist’s tool of marginal analysis. Let’s compare the price of private air travel to the closest (inferior) substitute: first-class air travel. To fly a family of five on a first-class journey lasting 2.5 hours in each direction, one must shell out about $7,000. (American Airlines charges $1,400 for the 2.5-hour flight in first class from DC to Miami, and United charges slightly less for a two-hour trip from DC to Chicago.) Thus, the premium for private air travel over first class is roughly $26,200 per trip (equal to $33,200 less $7,000).
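For anyone checking the marginal analysis at home, here is a short sketch of the arithmetic; every input is a figure quoted above.

```python
# Restating the back-of-envelope arithmetic; every input comes straight from
# the figures quoted in the post (the jet-lease quote and first-class fares).

LEASE_PRICE = 166_000        # dollars for 25 hours of small-jet time
LEASE_HOURS = 25
hourly_rate = LEASE_PRICE / LEASE_HOURS        # $6,640 per flight hour
private_round_trip = hourly_rate * 5           # five-hour round trip: $33,200

FIRST_CLASS_FARE = 1_400     # dollars per person, 2.5 hours each direction
FAMILY_SIZE = 5
first_class_total = FIRST_CLASS_FARE * FAMILY_SIZE  # $7,000 for the family

premium = private_round_trip - first_class_total
print(f"Hourly jet rate:        ${hourly_rate:,.0f}")
print(f"Private round trip:     ${private_round_trip:,.0f}")
print(f"First-class total:      ${first_class_total:,.0f}")
print(f"Private-travel premium: ${premium:,.0f}")  # $26,200
```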

To be fair, the $26,200 premium for private air travel is not a total loss. Relative to first-class travel, private air travel allows you to avoid airport baggage lines; avoid traveling with “commoners,” paparazzi, and would-be terrorists; achieve a certain elite status among your friends; and choose your own routes and flying times. Rock stars might value the option of doing drugs or other things 30,000 feet in the air. (I intentionally omitted “avoid airport security lines” because first-class passengers skip to the front of the security lines.) Using the logic of revealed preference, we can safely infer that any customer (including Gingrich) who opts for private air travel must value these enhancements by at least $26,200—else he would fly first-class.

Having consulted my preferences and budget constraints, I have decided that I am not plane rich. (Given my wife’s reaction to the mere suggestion of our flying first-class, I wouldn’t even mention private air travel within her striking distance.) For any normal set of preferences, at what income level does the $26,200 premium for private air travel make sense? The target audience seems to be top actors, rock stars, and small- and mid-cap company executives. Large-cap company executives presumably have access to company-owned jets. Are there enough of these folks out there to support this niche industry? Warren Buffett seems to think so.

My apologies to TOTM readers for taking last week off. A firm retreat in Phoenix followed by a hearing in Oklahoma City really puts a crimp in one’s fun time. In the meantime, the BCS announced that it is considering eliminating the automatic-qualification offers to BCS conference champions. The ACC and Big East must not be pleased. Proof that what gets written on this blog has a significant (and positive) impact on the world around us.

Joking aside, in Washington this week, the Supercommittee designed to solve the nation’s budget crisis is dominating the headlines. One wonders whether Washington Post writers who follow economic affairs coordinate their opinions. Within a day of the Supercommittee’s announced failure, at least three prominent columnists reached the identical conclusion about who is to blame: President Obama. Today, Michael Gerson writes “The supercommittee failed primarily because President Obama gave a shrug.” In another column, Ezra Klein writes “There’s not much we can do, they [the Obama administration] say, in a world where congressional Republicans won’t agree to a reasonable deal. In most cases, that’s true. In this case, it’s really not.” Klein questions why Obama never embraced the Bipartisan Fiscal Commission report (aka the “Bowles-Simpson report”). Finally, in yesterday’s Post, Robert Samuelson writes “The reason we cannot have a large budget deal is that Americans haven’t been prepared for one. The president hasn’t educated them, and so they can’t support what they don’t understand.” Samuelson explains that if we don’t address these entitlement programs, their costs will nearly double as a share of national income, which will displace spending in other areas or necessitate further tax increases or both.

If these opinions flowed exclusively from right-of-center columnists, then they could be discounted as political posturing. While Gerson was the lead speech writer for George W. Bush, Klein and Samuelson are hardly batting from the right. Will a “consensus” emerge among the center-left that Obama is to blame for the budget crisis, and will it propel Obama to confront the entitlement morass? Or do the political benefits of shirking the entitlement debate outweigh the costs? The lasting power of entitlements stems from the self-reinforcing dependency among the beneficiaries (who come to depend on the program) and the members of the political party protecting the program (who come to depend on the built-in constituency for votes). It would require tremendous leadership and courage for Obama to transcend politics as usual, and to save us from a Greek-like financial calamity. If he is not up for this task, look for the Republican presidential candidates to make Obama’s leadership issue number one in the 2012 election.

P.S. It’s probably best not to bring up budget deficits or Greek-like crises during the Thanksgiving meal. Better for your family to digest the food thoroughly before falling asleep on the couch. When in doubt, talk sports. Here’s a good conversation starter: When was the last time we cared about the Detroit Lions this late into the season?