Archives For Mobile

What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg? 

That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?

The complication in this regard is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a perfect gauge of the overall social welfare generated by particular policy choices.

As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.

Licensed, Unlicensed, & Shared Spectrum

Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.

SOURCE: U.S. Commerce Department

Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.

Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as it’s with FCC-certified equipment operating at authorized power levels. The best-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. Wi-Fi isn’t the only use for unlicensed spectrum, which also supports devices relying on Bluetooth, as well as personal medical devices, appliances, and a wide range of Internet-of-Things devices.

As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.

SOURCE: Wall Street Journal

While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:

The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.

The Need for Balanced Spectrum Policy

To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It can’t therefore be said that one approach is “better” than the other, as there is undeniable economic value to both.

That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.

To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.

For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.

SOURCE: Authors
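
To make the shape of that argument concrete, here is a minimal numerical sketch of the same purely hypothetical curves. The functional forms and coefficients are invented solely for illustration (they are not the authors’ model and are not estimated from any data); the sketch simply shows how consumer welfare can peak at an interior licensed share while auction revenue keeps rising all the way to 100%.

```python
# Illustrative only: invented functional forms, arbitrary units, no real data.
import numpy as np

licensed_share = np.linspace(0, 1, 101)  # fraction of spectrum that is licensed

# Hypothetical consumer welfare: diminishing returns to both licensed services
# and unlicensed innovation, so the optimum lies strictly between 0% and 100%.
consumer_welfare = 1.13 * np.sqrt(licensed_share) + 1.0 * np.sqrt(1 - licensed_share)

# Hypothetical auction revenue: rises with every additional MHz licensed.
auction_revenue = 0.8 * licensed_share

print(f"Consumer welfare peaks near {licensed_share[np.argmax(consumer_welfare)]:.0%} licensed")
print(f"Auction revenue peaks at {licensed_share[np.argmax(auction_revenue)]:.0%} licensed")
```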

In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenues to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffused costs.

Of course, we can debate about the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed broadband will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is more than three times greater than the auction revenues received by the U.S. Treasury.
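
A quick back-of-the-envelope comparison of the two headline figures cited above (a rough sketch using rounded numbers, nothing more):

```python
# Rounded headline figures from the text, in billions of dollars.
wifi_value_2021 = 995      # Wi-Fi Alliance estimate of Wi-Fi's 2021 economic value
auction_receipts = 230     # cumulative FCC spectrum-auction receipts

print(f"Wi-Fi value / auction receipts: {wifi_value_2021 / auction_receipts:.1f}x")  # about 4.3x
```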

Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.

Conclusion

Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.

At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases.  As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.

Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it needs to take seriously the need to encourage both private ownership of licensed spectrum and innovative uses of unlicensed and shared spectrum.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas W. Hazlett is the H.H. Macaulay Endowed Professor of Economics at Clemson University.]

Disclosure: The one time I met Ajit Pai was when he presented a comment on my book, “The Political Spectrum,” at a Cato Institute forum in 2018. He was gracious, thorough, and complimentary. He said that while he had enjoyed the volume, he hoped not to appear in upcoming editions. I took that to imply that he read the book as harshly critical of the Federal Communications Commission. Well, when merited, I concede. But it left me to wonder if he had followed my story to its end, as I document the success of reforms launched in recent decades and advocate their extension. Inclusion in a future edition might work out well for a chairman’s legacy. Or…

While my comment here focuses on radio-spectrum allocation, there was a notable reform achieved during the Pai FCC that touches on the subject, even if far more general in scope. In January 2018, the commission voted to establish an Office of Economics and Analytics.[1] The organizational change was expeditiously instituted that same year, with the new unit stood up under the leadership of FCC economist Giulia McHenry.[2]  I had long proposed an FCC “Office of Economic Analysis” on the grounds that it had a reasonable prospect of improving evidence-based policymaking, allowing cost-benefit calculations to be made in a more professional, independent, and less political context.[3]  I welcome this initiative by the Pai FCC and look forward to the empirical test now underway.[4] 

Big Picture

Spectrum policy had notable triumphs under Chairman Pai but was—as President Carter dubbed the failed 1980 Iranian hostage-rescue mission—an “incomplete success.” The main cause for celebration was the campaign to push spectrum-access rights into the marketplace. Pai’s public position was straightforward: “Our spectrum strategy calls for making low-band, mid-band, and high-band airwaves available for flexible use,” he wrote in an FCC blog post on June 19, 2018. But the means regulators use to pursue such a policy agenda have, historically, proven determinative. The Pai FCC traveled pathways both effective and ineffective, and we should learn from both. The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models. The traditional spectrum-allocation approach is to permit exactly what the FCC finds to be the best use of spectrum, but this assumes knowledge about the value of alternatives that the regulator does not possess. Moreover, it assumes away the costs of regulators imposing their solutions over and above a competitive process that might have less direction but more freedom. In a 2017 notice, the FCC displayed the progress we have made in departing from administrative control when it sought guidance from private-sector commenters this way:

“Are there opportunities to incentivize relocation or repacking of incumbent licensees to make spectrum available for flexible broadband use?

We seek comment on whether auctions … could be used to increase the availability of flexible use spectrum?”

By focusing on how rights—not markets—should be structured, the FCC may side-step useless food fights and let social progress flow.[5]

Progress

Substantial spectrum-allocation results were realized. Indeed, when one looks at the pattern in licensed and unlicensed allocations for “flexible use” under 10 GHz, the recent four-year interval coincides with generous increases, both absolutely and relative to trend. See Figure 1. These data feature expansions in bandwidth via liberal licenses that include 70 MHz for CBRS (3.5 GHz band), with rights assigned in Auction 105 (2020), and 280 MHz (3.7 – 3.98 GHz) assigned in Auction 107 (2020-21, soon to conclude). The 70 MHz added via Auction 1002 (600 MHz) in 2017 was accounted for during the previous FCC, but substantial bandwidth in Auctions 101, 102, and 103 was added in the millimeter wave bands (not shown in Figure 1, which focuses on low- and mid-band rights).[6]  Meanwhile, multiple increments of unlicensed spectrum were allocated in 2020: 45 MHz shifted from the Intelligent Transportation Services set-aside (5.9 GHz), 80 MHz of CBRS, and 1,200 MHz (6 GHz) dedicated to Wi-Fi-type services.[7]  Substantial millimeter-wave frequency space was previously set aside for unlicensed operations in 2016.[8]

Source: FCC and author’s calculations.

But those bandwidth totals are not the elephant in the room. Auction 107 has assigned licenses allocated 280 MHz of flexible-use mid-band spectrum, producing at least $94 billion in gross bids (of which about $13 billion will be paid to incumbent satellite licensees to reconfigure their operations so as to occupy just 200 MHz, rather than 500 MHz, of the 3.7 – 4.2 GHz band).[9]  This crushes previous FCC sales; indeed, it constitutes about 42% of all auction receipts (see the quick arithmetic check after the list):

  • FCC auction receipts, 1994-2019: $117 billion[10]
  • FCC auction receipts, 2020 (Auctions 103 and 105): $12.1 billion
  • FCC auction winning bids, 2020 (Auction 107): $94 billion (gross bids including relocation costs, incentive payments, and before Assignment Phase payments)
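
As a sanity check on the “about 42%” figure, here is a back-of-the-envelope calculation using the rounded totals listed above (gross bids, not net proceeds, so the final share will differ somewhat):

```python
# Rounded totals from the list above, in billions of dollars.
receipts_1994_2019 = 117.0   # cumulative receipts through 2019
receipts_2020 = 12.1         # Auctions 103 and 105
bids_auction_107 = 94.0      # Auction 107 gross bids

total = receipts_1994_2019 + receipts_2020 + bids_auction_107
print(f"Auction 107 share of all receipts: {bids_auction_107 / total:.0%}")  # ~42%
```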

The addition of the 280 MHz to existing flexible-use spectrum suitable for mobile (aka, Commercial Mobile Radio Services – CMRS) is the largest increment ever released. It will comprise about one-fourth of the low- and mid-band frequencies available via liberal licenses. This constitutes a huge advance for 5G deployments, but it goes much further: promoting competition, spurring innovation in apps, devices, and the Internet of Things, and pushing the technological envelope toward 6G and beyond. Notably, the U.S. has uniquely led this foray to a new frontier in spectrum allocation.

The FCC deserves praise for pushing this proceeding to fruition. So, here it is. The C-Band is a very big deal and a major policy success. And more: in Auction 107, the commission very wisely sold overlay rights. It did not wait for administrative procedures to reconfigure wireless use or tightly supervise new “sharing” of the band, but instead (a) accepted the incumbents’ basic strategy for reallocation, (b) sold new prospective rights to high bidders, subject to protection of incumbents, (c) used a fraction of proceeds to fund incumbents cooperating with the reallocation, plussing up payments for hitting deadlines, and (d) implicitly relied on the new licensees to push the relocation process forward.

Challenges

It is interesting that the FCC sort of articulated this useful model, and sort of did not:

For a successful public auction of overlay licenses in the 3.7-3.98 GHz band, bidders need to know before an auction commences when they will get access to that currently occupied spectrum as well as the costs they will incur as a condition of their overlay license. (FCC C-Band Order [Feb. 7, 2020], par. 110)

A germ of truth, but note: Auction 107 also demonstrated just the reverse. Rights were sold prior to clearing the airwaves, and bidders—while liable for “incentive payments”—do not know with certainty when the frequencies will be available for their use. Risk is embedded, as it widely is in financial assets (corporate equity shares are efficiently traded despite wide disagreement on future earnings), and yet markets perform. Indeed, the “certainty” approach touted by the FCC in its language about a “successful public auction” has long deterred efficient reallocations, as the incumbents’ exiting process holds up arrival of the entrants. The central feature of the C-Band reallocation was not to create certainty, but to embed an overlay approach into the process. This draws incumbents and entrants together into positive-sum transactions (mediated by the FCC or party-to-party) where they cooperate to create new productive opportunities, sharing the gains.

The inspiration for the C-Band reallocation of satellite spectrum was bottom-up. As with so much of the radio spectrum, the band devoted to satellite distribution of video (relays to and from an array of broadcast and cable TV systems and networks) was old and tired. For decades, applications and systems were locked in by law. They consumed lots of bandwidth while ignoring the emergence of newer technologies like fiber optics (emphasis to underscore that products launched in the 1980s are still cutting-edge challenges for 2021 Spectrum Policy). Spying this mismatch, and seeking gains from trade, creative risk-takers petitioned the FCC.

In a mid-2017 request, computer chipmaker Intel and C-Band satellite carrier Intelsat (no corporate relationship) joined forces to ask for permission to expand the scope of satellite licenses. The proffered plan was for license holders to invest in spectrum economies by upgrading satellites and earth stations—magically creating new, unoccupied channels in prime mid-band frequencies perfect for highly valuable 5G services. All existing video transport services would continue, while society would enjoy way more advanced wireless broadband. All regulators had to do was allow “change of use” in existing licenses. Markets would do the rest: satellite operators would make efficient multi-billion-dollar investments, coordinating with each other and their customers, and then take bids from new users itching to access the prime 4 GHz spectrum. The transition to bold, new, more valuable applications would compensate legacy customers and service providers.

This “spectrum sharing” can spin gold – seizing on capitalist discovery and demand revelation in market bargains. Voila, the 21st century, delivered.

Well, yes and no. At first, the FCC filing was a yawner, the standard bureaucratic response. But this one took off when Chairman Pai—alertly, and in the public interest—embraced the proposal, putting it on the July 12, 2018 FCC meeting agenda. Intelsat’s market cap jumped from about $500 million to over $4.5 billion—visible evidence that the spectrum it was using was worth far more than the service it was providing, and that the company stood to realize some substantial fraction of the resource revaluation.[11] 

While the Pai FCC leaned in the proper policy direction, politics soon blew the process down. Congress denounced the “private auction” as a “windfall,” bellowing against the unfairness of allowing corporations (some foreign-owned!) to cash out. The populist message was upside-down. The social damage created by mismanagement of spectrum—millions of Americans paying more and getting less from wireless than otherwise, robbing ordinary citizens of vast consumer surplus—was being fixed by entrepreneurial initiative. Moreover, the public gains (lower prices plus innovation externalities spun off from liberated bandwidth) were undoubtedly far greater than any rents captured by the incumbent licensees. And a great bonus to spur future progress: rewards for those parties initiating and securing efficiency-enhancing rights will unleash vastly more productive activity.

But the populist winds—gale force and bipartisan—spun the FCC.

It was legally correct that Intelsat and its rival satellite carriers did not own the spectrum allocated to the C-Band. Indeed, that was the root of the problem. And here’s a fatal catch: in applying for broader spectrum property rights, they revealed a valuable discovery. The FCC, posing as referee, turned competitor and appropriated the proffered business plan on behalf of its client (the U.S. government), and then auctioned it to bidders. Regulators did tip the incumbents, whose help was still needed in reorganizing the C-Band, setting $3.3 billion as a fair price for “moving costs” (changing out technology to reduce their transmission footprints) and dangling another $9.7 billion in “incentive payments” not to dilly-dally. In total, carriers have bid some $93.9 billion, or $1.02 per MHz-Pop.[12] This is 4.7 times the price paid for the Priority Access Licenses (PALs) allocated 70 MHz in Auction 105 earlier in 2020.
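
For readers unfamiliar with the metric, price per MHz-pop is simply dollars bid divided by the bandwidth sold times the population covered by the licenses. A minimal sketch, assuming roughly 328 million covered POPs (an illustrative assumption; the actual covered POPs in Auction 107 differ somewhat):

```python
# Price per MHz-pop = dollars / (MHz of bandwidth * population covered).
# The population figure is an illustrative assumption, not the auction's actual POPs count.
gross_bids = 93.9e9        # dollars, Auction 107 clock-phase gross bids
bandwidth_mhz = 280        # 3.7-3.98 GHz
covered_pops = 328e6       # assumed U.S. population covered

price = gross_bids / (bandwidth_mhz * covered_pops)
print(f"~${price:.2f} per MHz-pop")  # roughly $1.02
```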

The TOTM assignment was not to evaluate Ajit Pai but to evaluate the Pai FCC and its spectrum policies. On that scale, great value was delivered by the Intel-Intelsat proposal, and the FCC’s alert endorsement, offset in some measure by the long-term losses that will likely flow from the dirigiste retreat to fossilized spectrum rights controlled by diktat.

Sharing Nicely

And that takes us to 2020’s Auction 105 (Citizens Broadband Radio Services, CBRS). The U.S. has lagged much of the world in allocating flexible-use spectrum rights in the 3.5 GHz band. Ireland auctioned rights to use 350 MHz in May 2017, and many countries did likewise between then and 2020, distributing far more than the 70 MHz allocated to Priority Access Licenses (PALs) in the United States; allocations have ranged from 150 MHz to 390 MHz. The Pai FCC can plausibly assign the lag to “preexisting conditions.” Here, however, I will stress that the Pai FCC did not substantially further our understanding of the costs of “spectrum sharing” under coordinating devices imposed by the FCC.

All commercially valuable spectrum bands are shared. The most intensely shared, in the relevant economic sense, are those bands curated by mobile carriers. These frequencies are complemented by extensive network capital supplied by investors, and permit millions of users—including international roamers—to gain seamless connectivity. Unlicensed bands, alternatively, tend to separate users spatially, powering down devices to localize footprints. These limits work better in situations where users desire short transmissions, like a Bluetooth link from iPhone to headphone or when bits can be handed off to a wide area network by hopping 60 feet to a local “hot spot.” The application of “spectrum sharing” to imply a non-exclusive (or unlicensed) rights regime is, at best, highly misleading. Whenever conditions of scarcity exist, meaning that not all uses can be accommodated without conflict, some rationing follows. It is commonly done by price, behavioral restriction, or both.

In CBRS, the FCC has imposed three layers of “priority” access across the 3550-3700 MHz band. Certain government radars are assumed to be fixed and must be protected. When in use, these systems demand that other wireless services stay silent on particular channels. Next in line are PAL owners, parties that have paid for exclusivity but are not guaranteed access to a given channel. These rights, which sold for about $4.5 billion, are allocated dynamically by a controller (a Spectrum Access System, or SAS). The radios and networks used automatically and continuously check in to obtain spectrum-space permissions. Seven PALs, allocated 10 MHz each, have been assigned, 70 MHz in total. Finally, General Authorized Access (GAA) permissions are given without limit or exclusivity to radio devices across the 80 MHz remaining in the band, plus any PALs not in use. Some 5G phones are already equipped to use such bands on an unlicensed basis.
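
To illustrate the three-tier logic in the most schematic way possible, the toy sketch below grants channels strictly in priority order: incumbent radar first, then PAL holders, then GAA devices on whatever remains. It is purely illustrative; the real SAS coordinates by geography, power level, and sensing data, none of which is modeled here, and the names and numbers below are invented.

```python
# Toy illustration of CBRS-style tiered access -- not the actual SAS protocol.
from dataclasses import dataclass

TIERS = {"incumbent": 0, "PAL": 1, "GAA": 2}  # lower number = higher priority

@dataclass
class Request:
    user: str
    tier: str             # "incumbent", "PAL", or "GAA"
    channels_needed: int  # each channel stands in for 10 MHz

def assign_channels(requests, total_channels=15):  # 15 x 10 MHz across 3550-3700 MHz
    """Grant channels in priority order; lower tiers get only what remains."""
    available = total_channels
    grants = {}
    for req in sorted(requests, key=lambda r: TIERS[r.tier]):
        granted = min(req.channels_needed, available)
        grants[req.user] = granted
        available -= granted
    return grants

demo = [
    Request("Federal radar (incumbent)", "incumbent", 2),
    Request("Carrier A (PAL)", "PAL", 4),
    Request("Carrier B (PAL)", "PAL", 3),
    Request("Campus IoT network (GAA)", "GAA", 8),
]
print(assign_channels(demo))
# The GAA request is squeezed: it asks for 8 channels but receives only the 6 left over.
```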

We shall see how the U.S. system works in comparison to alternatives. What is important to note is that the particular form of “spectrum sharing” is neither necessary nor free. As is standard outside the U.S., exclusive rights analogous to CMRS licenses could have been auctioned here, with U.S. government radars given vested rights.

One point that is routinely missed is that the decision to have the U.S. government partition the rights into three layers immediately conceded that U.S. government priority applications (for radar) would never shift. That is asserted as though it were a proposition needing no justification, but it is precisely the sort of impediment to efficiency that has plagued spectrum reallocations for decades. It was, for instance, the 2002 assumption behind TV “white spaces”—that 402 MHz of TV Band frequencies was fixed in place, that the unused channels could never be repackaged and sold as exclusive rights and diverted to higher-valued uses. That unexamined assertion has since been overtaken by events, as seen in the reduction of the band from 402 MHz to 235 MHz following Auctions 73 (2008) and 1001/1002 (2016-17), as well as in the clear possibility that remaining TV broadcasts could today be entirely transferred to cable, satellite, and OTT broadband (as they have already, effectively, been). The problem in CBRS is that the rights now distributed for the 80 MHz of unlicensed spectrum, with their protections of certain priority services, do not sprinkle the proper rights into the market such that positive-sum transitions can be negotiated. We’re stuck with whatever inefficiencies this “preexisting condition” of the 3.5 GHz band might endow, unless another decade-long FCC spectrum allocation can move things forward.[13]

Already visible is that the rights sold as PALs in CBRS are worth only about 20% of the rights sold in the C-Band. This differential reflects the power restrictions and overhead costs embedded in the FCC’s sharing rules for CBRS (involving dynamic allocation of the exclusive access rights conveyed in PALs) but avoided in the C-Band. In the latter, the sharing arrangements are delegated to the licensees, and the prices bidders paid reveal that they see these rights as more productive, with opportunities to host more services.

There should be greater recognition of the relevant trade-offs in imposing coexistence rules. Yet, the Pai FCC succumbed in 5.9 GHz and in the 6 GHz bands to the tried-and-true options of Regulation Past. This was hugely ironic in the former, where the FCC had in 1999 imposed unlicensed access under rules that favored specific automotive informatics—Dedicated Short-Range Communications (DSRC)—that proved a 20-year bust. In diagnosing this policy blunder, the FCC then repeated it, splitting off a 45 MHz band with Wi-Fi-friendly unlicensed rules, and leaving 30 MHz to continue as the 1999 set-aside for DSRC. A liberalization of rights that would have allowed for a “private auction” to change the use of the band would have been the preferred approach. Instead, we are left with a partition of the band into rival rule regimes again established by administrative fiat.

This approach was then imposed again in the large 1.2 GHz unlicensed allocation surrounding 6 GHz, making a big 2020 splash. The FCC here assumed, categorically, that unlicensed rules are the best way to sponsor spectrum coordination. It ignored the costs of that coordination. And the commission appears to have forgotten the progress it has made with innovative policy solutions, pulling in market forces through “overlay” licenses. These useful devices were used, in one form or another, to reallocate spectrum for 2G in Auction 4, AWS in Auction 66, millimeter bands in Auctions 102 and 103, the “TV Incentive Auction,” and the satellite C-Band in Auction 107, and they have recently appeared as star players in the January 2021 FCC plan to rationalize the complex mix of rights scattered around the 2.5 GHz band.[14]  That mix is too complicated for administrators to untangle; it could be transactionally more efficient to let market competitors figure it out.

The Future

The re-allocations in the 5.9 GHz and 6 GHz bands may yet host productive services. One can hope. But how will regulators know whether the options allowed, and taken, are superior to the alternatives—suppressed by law for the next five, 10, or 20 years—that might have emerged had competitors had the right to test business models or technologies disfavored by regulators’ best-laid plans? That is the thinking that locked in the TV band, the C-Band for satellites, and the ITS band. It’s what we learned to be problematic throughout the Political Radio Spectrum. We shall see, as Chairman Pai speculated, what future chapters these decisions leave for future editions.


[1]   https://www.fcc.gov/document/fcc-votes-establish-office-economics-analytics-0

[2]   https://www.fcc.gov/document/fcc-opens-office-economics-and-analytics

[3]   Thomas Hazlett, Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins, Resources for the Future Discussion Paper 11-23 (May 2011); David Honig, FCC Reorganization: How Replacing Silos with Functional Organization Would Advance Civil Rights, 3 University of Pennsylvania Journal of Law and Public Affairs 18 (Aug. 2018). 

[4] It is with great sadness that I note that Jerry Ellig, the 2017-18 FCC Chief Economist who might well have offered the most careful analysis of such a structural reform, will not be available for the task – one which he had already begun, writing this recent essay with two other FCC Chief Economists: Babette Boliek, Jerry Ellig and Jeff Prince, Improved economic analysis should be lasting part of Pai’s FCC legacy, The Hill (Dec. 29, 2020).  Jerry’s sudden passing, on January 21, 2021, is a deep tragedy.  Our family weeps for his wonderful wife, Sandy, and his precious daughter, Kat. 

[5]  As argued in: Thomas Hazlett, “The best way for the FCC to enable a 5G future,” Reuters (Jan. 17, 2018).

[6]  In 2018-19, FCC Auctions 101 and 102 offered licenses allocated 1,550 MHz of bandwidth in the 24 GHz and 28 GHz bands, although some of the bandwidth had previously been assigned and post-auction confusion over interference with adjacent frequency uses (in 24 GHz) has impeded some deployments.  In 2020, Auction 103 allowed competitive bidding for licenses to use 37, 39, and 47 GHz frequencies, 3400 MHz in aggregate.  Net proceeds to the FCC in 101, 102 and 103 were:  $700.3 million, $2.02 billion, and $7.56 billion, respectively.

[7]   I estimate that some 70 MHz of unlicensed bandwidth, allocated for television white space devices, was reduced pursuant to the Incentive Auction in 2017.  This, however, was baked into spectrum policy prior to the Pai FCC.

[8]   Notably, 64-71 GHz was allocated for unlicensed radio operations in the Spectrum Frontiers proceeding, adjacent to the 57-64 GHz unlicensed bands.  See Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, et al., Report and Order and Further Notice of Proposed Rulemaking, 31 FCC Rcd 8014 (2016), 8064-65, para. 130.

[9]   The revenues reflect bids made in the Clock phase of Auction 107.  An Assignment Phase has yet to occur as of this writing.

[10]  The 2021 FCC Budget request, p. 34: “As of December 2019, the total amount collected for broader government use and deficit reduction since 1994 exceeds $117 billion.” 

[11]   Kerrisdale Management issued a June 2018 report that tied the proceeding to a dubious source: “to the market-oriented perspective on spectrum regulation – as articulated, for instance, by the recently published book The Political Spectrum by former FCC chief economist Thomas Winslow Hazlett – [that] the original sin of the FCC was attempting to dictate from on high what licensees should or shouldn’t do with their spectrum. By locking certain bands into certain uses, with no simple mechanism for change or renegotiation, the agency guaranteed that, as soon as technological and commercial realities shifted – as they do constantly – spectrum use would become inefficient.” 

[12]   Net proceeds will be reduced to reflect bidding credits extended to small businesses, but additional bids will be received in the Assignment Phase of Auction 107, still to be held. Likely totals will remain somewhere around current levels. 

[13]  The CBRS band is composed of frequencies at 3550-3700 MHz.  The top 50 MHz of that band was officially allocated in 2005 in a proceeding that started years earlier.  It was then curious that the adjacent 100 MHz was not included. 

[14] FCC Seeks Comment on Procedures for 2.5 GHz Reallocation (Jan. 13, 2021).

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order – a process that was perhaps subject to more scrutiny than the process of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that failed to include any Ligado representatives or any FCC commissioners for their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal-interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard band from neighboring spectrum and also reduce its base-station power levels by 99% compared to what it proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

This is the third in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, and the second here). It draws on research from a soon-to-be published ICLE white paper.

(Comparison of Google and Apple’s smartphone business models. Red $ symbols represent money invested; Green $ symbols represent sources of revenue; Black lines show the extent of Google and Apple’s control over their respective platforms)

For the third in my series of posts about the Google Android decision, I will delve into the theories of harm identified by the Commission. 

The big picture is that the Commission’s analysis was particularly one-sided. The Commission failed to adequately account for the complex business challenges that Google faced – such as monetizing the Android platform and shielding it from fragmentation. To make matters worse, its decision rests on dubious factual conclusions and extrapolations. The result is a highly unbalanced assessment that could ultimately hamstring Google and prevent it from effectively competing with its smartphone rivals, Apple in particular.

1. Tying without foreclosure

The first theory of harm identified by the Commission concerned the tying of Google’s Search app with the Google Play app, and of Google’s Chrome app with both the Google Play and Google Search apps.

Oversimplifying, Google required its OEMs to choose between pre-installing a bundle of Google applications and forgoing some of the most important ones (notably Google Play). The Commission argued that this gave Google a competitive advantage that rivals could not emulate (even though Google’s terms did not preclude OEMs from simultaneously pre-installing rival web browsers and search apps). 

To support this conclusion, the Commission notably asserted that no alternative distribution channel would enable rivals to offset the competitive advantage that Google obtained from tying. This finding is, at best, dubious. 

For a start, the Commission claimed that user downloads were not a viable alternative distribution channel, even though roughly 250 million apps are downloaded on Google’s Play store every day.

The Commission sought to overcome this inconvenient statistic by arguing that Android users were unlikely to download apps that duplicated the functionalities of a pre-installed app – why download a new browser if there is already one on the user’s phone?

But this reasoning is far from watertight. For instance, the 17th most-downloaded Android app, the “Super-Bright LED Flashlight” (with more than 587 million downloads), mostly replicates a feature that is pre-installed on all Android devices. Moreover, the five most-downloaded Android apps (Facebook, Facebook Messenger, WhatsApp, Instagram and Skype) provide functionalities that are, to some extent at least, offered by apps that have, at some point or another, been preinstalled on many Android devices (notably Google Hangouts, Google Photos and Google+).

The Commission countered that communications apps were not appropriate counterexamples, because they benefit from network effects. But this overlooks the fact that the most successful communications and social media apps benefited from very limited network effects when they were launched, and that they succeeded despite the presence of competing pre-installed apps. Direct user downloads are thus a far more powerful vector of competition than the Commission cared to admit.

Similarly concerning is the Commission’s contention that paying OEMs or Mobile Network Operators (“MNOs”) to pre-install their search apps was not a viable alternative for Google’s rivals. Some of the reasons cited by the Commission to support this finding are particularly troubling.

For instance, the Commission claimed that high transaction costs prevented parties from concluding these pre-installation deals. 

But pre-installation agreements are common in the smartphone industry. In recent years, Microsoft struck a deal with Samsung to pre-install some of its office apps on the Galaxy Note 10. In 2010, it also paid Verizon to pre-install the Bing search app on a number of Samsung phones. Likewise, a number of Russian internet companies have been in talks with Huawei to pre-install their apps on its devices. And Yahoo reached an agreement with Mozilla to make Yahoo the default search engine for Mozilla’s web browser. Transaction costs do not appear to have been an obstacle in any of these cases.

The Commission also claimed that duplicating too many apps would cause storage space issues on devices. 

And yet, a back-of-the-envelope calculation suggests that storage space is unlikely to be a major issue. For instance, the Bing Search app has a download size of 24MB, whereas typical entry-level smartphones generally have an internal memory of at least 64GB (that can often be extended to more than 1TB with the addition of an SD card). The Bing Search app thus takes up less than one-thousandth of these devices’ internal storage. Granted, the Yahoo search app is slightly larger than Microsoft’s, weighing almost 100MB. But this is still insignificant compared to a modern device’s storage space.
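
The arithmetic behind that claim is straightforward (a rough sketch using the approximate app sizes cited above, ignoring OS overhead and variation across devices):

```python
# Approximate sizes cited in the text.
bing_app_mb = 24
yahoo_app_mb = 100
device_storage_mb = 64 * 1024   # 64 GB entry-level device

print(f"Bing share of storage:  {bing_app_mb / device_storage_mb:.3%}")   # ~0.037%
print(f"Yahoo share of storage: {yahoo_app_mb / device_storage_mb:.3%}")  # ~0.153%
```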

Finally, the Commission claimed that rivals were contractually prevented from concluding exclusive pre-installation deals because Google’s own apps would also be pre-installed on devices.

However, while it is true that Google’s apps would still be present on a device, rivals could still pay for their applications to be set as default. Even Yandex – a plaintiff – recognized that this would be a valuable solution. In its own words (taken from the Commission’s decision):

Pre-installation alongside Google would be of some benefit to an alternative general search provider such as Yandex […] given the importance of default status and pre-installation on home screen, a level playing field will not be established unless there is a meaningful competition for default status instead of Google.

In short, the Commission failed to convincingly establish that Google’s contractual terms prevented as-efficient rivals from effectively distributing their applications on Android smartphones. The evidence it adduced was simply too thin to support anything close to that conclusion.

2. The threat of fragmentation

The Commission’s second theory of harm concerned the so-called “anti-fragmentation” agreements concluded between Google and OEMs. In a nutshell, Google only agreed to license the Google Search and Google Play apps to OEMs that sold “Android Compatible” devices (i.e., devices sold with a version of Android that did not stray too far from Google’s most recent version).

According to Google, this requirement was necessary to limit the number of Android forks that were present on the market (as well as older versions of the standard Android). This, in turn, reduced development costs and prevented the Android platform from unraveling.

The Commission disagreed, arguing that Google’s anti-fragmentation provisions thwarted competition from potential Android forks (i.e. modified versions of the Android OS).

This conclusion raises at least two critical questions: The first is whether these agreements were necessary to ensure the survival and competitiveness of the Android platform, and the second is why “open” platforms should be precluded from partly replicating a feature that is essential to rival “closed” platforms, such as Apple’s iOS.

Let us start with the necessity, or not, of Google’s contractual terms. If fragmentation did indeed pose an existential threat to the Android ecosystem, and anti-fragmentation agreements averted this threat, then it is hard to make a case that they thwarted competition. The Android platform would simply not have been as viable without them.

The Commission dismissed this possibility, relying largely on statements made by Google’s rivals (many of whom likely stood to benefit from the suppression of these agreements). For instance, the Commission cited comments that it received from Yandex – one of the plaintiffs in the case:

(1166) The fact that fragmentation can bring significant benefits is also confirmed by third-party respondents to requests for information:

[…]

(2) Yandex, which stated: “Whilst the development of Android forks certainly has an impact on the fragmentation of the Android ecosystem in terms of additional development being required to adapt applications for various versions of the OS, the benefits of fragmentation outweigh the downsides…”

Ironically, the Commission relied on Yandex’s statements while, at the same time, it dismissed arguments made by Android app developers, on account that they were conflicted. In its own words:

Google attached to its Response to the Statement of Objections 36 letters from OEMs and app developers supporting Google’s views about the dangers of fragmentation […] It appears likely that the authors of the 36 letters were influenced by Google when drafting or signing those letters.

More fundamentally, the Commission’s claim that fragmentation was not a significant threat is at odds with an almost unanimous agreement among industry insiders.

For example, while it is not dispositive, a rapid search for the terms “Google Android fragmentation”, using the DuckDuckGo search engine, leads to results that cut strongly against the Commission’s conclusions. Of the first ten results, only one could remotely be construed as claiming that fragmentation was not an issue. The others paint a very different picture (below are some of the most salient excerpts):

“There’s a fairly universal perception that Android fragmentation is a barrier to a consistent user experience, a security risk, and a challenge for app developers.” (here)

“Android fragmentation, a problem with the operating system from its inception, has only become more acute an issue over time, as more users clamor for the latest and greatest software to arrive on their phones.” (here)

“Android Fragmentation a Huge Problem: Study.” (here)

“Google’s Android fragmentation fix still isn’t working at all.” (here)

“Does Google care about Android fragmentation? Not now—but it should.” (here).

“This is very frustrating to users and a major headache for Google… and a challenge for corporate IT,” Gold said, explaining that there are a large number of older, not fully compatible devices running various versions of Android. (here)

Perhaps more importantly, one might question why Google should be treated differently than rivals that operate closed platforms, such as Apple, Microsoft and Blackberry (before the last two mostly exited the Mobile OS market). By definition, these platforms limit all potential forks (because they are based on proprietary software).

The Commission argued that Apple, Microsoft and Blackberry had opted to run “closed” platforms, which gave them the right to prevent rivals from copying their software.

While this answer has some superficial appeal, it is incomplete. Android may be an open source project, but this is not true of Google’s proprietary apps. Why should it be forced to offer them to rivals who would use them to undermine its platform? The Commission did not meaningfully consider this question.

And yet, industry insiders routinely compare the fragmentation of Apple’s iOS and Google’s Android OS, in order to gauge the state of competition between both firms. For instance, one commentator noted:

[T]he gap between iOS and Android users running the latest major versions of their operating systems has never looked worse for Google.

Likewise, an article published in Forbes concluded that Google’s OEMs were slow at providing users with updates, and that this might drive users and developers away from the Android platform:

For many users the Android experience isn’t as up-to-date as Apple’s iOS. Users could buy the latest Android phone now and they may see one major OS update and nothing else. […] Apple users can be pretty sure that they’ll get at least two years of updates, although the company never states how long it intends to support devices.

However this problem, in general, makes it harder for developers and will almost certainly have some inherent security problems. Developers, for example, will need to keep pushing updates – particularly for security issues – to many different versions. This is likely a time-consuming and expensive process.

To recap, the Commission’s decision paints a world that is either black or white: either firms operate closed platforms, and they are then free to limit fragmentation as they see fit, or they create open platforms, in which case they are deemed to have accepted much higher levels of fragmentation.

This stands in stark contrast to industry coverage, which suggests that users and developers of both closed and open platforms care a great deal about fragmentation, and demand that measures be put in place to address it. If this is true, then the relative fragmentation of open and closed platforms has an important impact on their competitive performance, and the Commission was wrong to reject comparisons between Google and its closed ecosystem rivals. 

3. Google’s revenue sharing agreements

The last part of the Commission’s case centered on revenue-sharing agreements between Google and its OEMs/MNOs. Google paid these parties to exclusively place its search app on the home screen of their devices. According to the Commission, these payments reduced OEMs’ and MNOs’ incentives to pre-install competing general search apps.

However, to reach this conclusion, the Commission had to make the critical (and highly dubious) assumption that rivals could not match Google’s payments.

To get to that point, it notably assumed that rival search engines would be unable to increase their share of mobile search results beyond their share of desktop search results. The underlying intuition appears to be that users who freely chose Google Search on desktop (Google Search & Chrome are not set as default on desktop PCs) could not be convinced to opt for a rival search engine on mobile.

But this ignores the possibility that rivals might offer an innovative app that swayed users away from their preferred desktop search engine. 

More importantly, this reasoning cuts against the Commission’s own claim that pre-installation and default placement were critical. If most users dismiss their device’s default search app and search engine in favor of their preferred ones, then pre-installation and default placement are largely immaterial, and Google’s revenue-sharing agreements could not possibly have thwarted competition (because they did not prevent users from independently installing their preferred search app). On the other hand, if users are easily swayed by default placement, then there is no reason to believe that rivals could not exceed their desktop market share on mobile phones.

The Commission was also wrong when it claimed that rival search engines were at a disadvantage because of the structure of Google’s revenue sharing payments. OEMs and MNOs allegedly lost all of their payments from Google if they exclusively placed a rival’s search app on the home screen of a single line of handsets.

The key question is the following: could Google automatically tilt the scales to its advantage by structuring the revenue sharing payments in this way? The answer appears to be no. 

For instance, it has been argued that exclusivity may intensify competition for distribution. Conversely, other scholars have claimed that exclusivity may deter entry in network industries. Unfortunately, the Commission did not examine whether Google’s revenue-sharing agreements fell within the latter category. 

It thus provided insufficient evidence to support its conclusion that the revenue sharing agreements reduced OEMs’ (and MNOs’) incentives to pre-install competing general search apps, rather than merely increasing competition “for the market”.

4. Conclusion

To summarize, the Commission overestimated the effect that Google’s behavior might have on its rivals. It almost entirely ignored the justifications that Google put forward and relied heavily on statements made by its rivals. The result is a one-sided decision that puts undue strain on the Android business model, while providing few, if any, benefits in return.


This is the first in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision. It draws on research from a soon-to-be published ICLE white paper.

The European Commission’s recent Google Android decision will surely go down as one of the most important competition proceedings of the past decade. And yet, an in-depth reading of the 328-page decision should leave attentive readers with a bitter taste.

One of the Commission’s most significant findings is that the Android operating system and Apple’s iOS are not in the same relevant market, along with the related conclusion that Apple’s App Store and Google Play are also in separate markets.

This blog post points to a series of flaws that undermine the Commission’s reasoning on this point. As a result, the Commission’s claim that Google and Apple operate in separate markets is mostly unsupported.

1. Everyone but the European Commission thinks that iOS competes with Android

Surely the assertion that the two predominant smartphone ecosystems in Europe don’t compete with each other will come as a surprise to… anyone paying attention: 


Apple 10-K:

The Company believes the availability of third-party software applications and services for its products depends in part on the developers’ perception and analysis of the relative benefits of developing, maintaining and upgrading such software and services for the Company’s products compared to competitors’ platforms, such as Android for smartphones and tablets and Windows for personal computers.

Google 10-K:

We face competition from: Companies that design, manufacture, and market consumer electronics products, including businesses that have developed proprietary platforms.

This leads to a critical question: Why did the Commission choose to depart from the instinctive conclusion that Google and Apple compete vigorously against each other in the smartphone and mobile operating system market? 

As explained below, its justifications for doing so were deeply flawed.

2. It does not matter that OEMs cannot license iOS (or the App Store)

One of the main reasons why the Commission chose to exclude Apple from the relevant market is that OEMs cannot license Apple’s iOS or its App Store.

But is it really possible to infer that Google and Apple do not compete against each other because their products are not substitutes from OEMs’ point of view? 

The answer to this question is likely no.

Relevant markets, and market shares, are merely proxies for market power (which is the appropriate baseline upon which to build a competition investigation). As Louis Kaplow puts it:

[T]he entire rationale for the market definition process is to enable an inference about market power.

If there is a competitive market for Android and Apple smartphones, then it is somewhat immaterial that Google is the only firm to successfully offer a licensable mobile operating system (as opposed to Apple and Blackberry’s “closed” alternatives).

By exercising its “power” against OEMs by, for instance, degrading the quality of Android, Google would, by the same token, weaken its competitive position against Apple. Google’s competition with Apple in the smartphone market thus constrains Google’s behavior and limits its market power in Android-specific aftermarkets (on this topic, see Borenstein et al., and Klein).

This is not to say that Apple’s iOS (and App Store) is, or is not, in the same relevant market as Google Android (and Google Play). But the fact that OEMs cannot license iOS or the App Store is mostly immaterial for market definition purposes.

3. Google would find itself in a more “competitive” market if it decided to stop licensing the Android OS

The Commission’s reasoning also leads to illogical outcomes from a policy standpoint. 

Google could suddenly find itself in a more “competitive” market if it decided to stop licensing the Android OS and operated a closed platform (like Apple does). The direct purchasers of its products – consumers – would then be free to switch between Apple and Google’s products.

As a result, an act that has no obvious effect on actual market power — and that could have a distinctly negative effect on consumers — could nevertheless significantly alter the outcome of competition proceedings on the Commission’s theory. 

One potential consequence is that firms might decide to close their platforms (or refuse to open them in the first place) in order to avoid competition scrutiny (because maintaining a closed platform might effectively lead competition authorities to place them within a wider relevant market). This might ultimately reduce product differentiation among mobile platforms (due to the disappearance of open ecosystems) – the exact opposite of what the Commission sought to achieve with its decision.

This is, among other things, what Antonin Scalia objected to in his Eastman Kodak dissent: 

It is quite simply anomalous that a manufacturer functioning in a competitive equipment market should be exempt from the per se rule when it bundles equipment with parts and service, but not when it bundles parts with service [when the manufacturer has a high share of the “market” for its machines’ spare parts]. This vast difference in the treatment of what will ordinarily be economically similar phenomena is alone enough to call today’s decision into question.

4. Market shares are a poor proxy for market power, especially in narrowly defined markets

Finally, the problem with the Commission’s decision is not so much that it chose to exclude Apple from the relevant markets, but that it then cited the resulting market shares as evidence of Google’s alleged dominance:

(440) Google holds a dominant position in the worldwide market (excluding China) for the licensing of smart mobile OSs since 2011. This conclusion is based on: 

(1) the market shares of Google and competing developers of licensable smart mobile OSs […]

In doing so, the Commission ignored one of the critical findings of the law & economics literature on market definition and market power: Although defining a narrow relevant market may not itself be problematic, the market shares thus adduced provide little information about a firm’s actual market power. 

For instance, Richard Posner and William Landes have argued that:

If instead the market were defined narrowly, the firm’s market share would be larger but the effect on market power would be offset by the higher market elasticity of demand; when fewer substitutes are included in the market, substitution of products outside of the market is easier. […]

If all the submarket approach signifies is willingness in appropriate cases to call a narrowly defined market a relevant market for antitrust purposes, it is unobjectionable – so long as appropriately less weight is given to market shares computed in such a market.
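
Their point can be restated somewhat more formally using Landes and Posner’s own framework. The formula below is a stylized rendering of their well-known markup relationship (the notation is mine, not the Commission’s):

\[
L_i \;=\; \frac{P - MC}{P} \;=\; \frac{S_i}{\varepsilon^{d}_{m} + \varepsilon^{s}_{f}\,(1 - S_i)}
\]

where S_i is firm i’s share of the relevant market, \varepsilon^{d}_{m} is the market elasticity of demand, and \varepsilon^{s}_{f} is the elasticity of supply of fringe rivals. Defining the market more narrowly raises S_i, but it also raises \varepsilon^{d}_{m} (more substitutes now sit outside the market), so the markup inferred from the share need not rise, which is precisely Posner and Landes’s point.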

Likewise, Louis Kaplow observes that:

In choosing between a narrower and a broader market (where, as mentioned, we are supposing that the truth lies somewhere in between), one would ask whether the inference from the larger market share in the narrower market overstates market power by more than the inference from the smaller market share in the broader market understates market power. If the lesser error lies with the former choice, then the narrower market is the relevant market; if the latter minimizes error, then the broader market is best.
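
Restated in symbols (again, the notation is mine): let L* denote the firm’s true market power, and let L_N and L_B denote the levels one would infer from its shares of the narrower and broader candidate markets, with L_B ≤ L* ≤ L_N. Kaplow’s prescription is simply:

\[
\text{choose the narrower market} \iff (L_{N} - L^{*}) < (L^{*} - L_{B}),
\]

that is, pick whichever candidate market minimizes the error in the resulting inference about market power.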

The Commission failed to heed these important findings.

5. Conclusion

The upshot is that Apple should not have been automatically excluded from the relevant market. 

To be clear, the Commission did discuss competition from Apple later in the decision. And it also asserted that its findings would hold even if Apple were included in the OS and App Store markets, because Android’s share of devices sold would have ranged from 45% to 79%, depending on the year (although this ignores other potential metrics, such as the value of devices sold or Google’s share of advertising revenue).

However, by gerrymandering the market definition (which European case law likely permitted it to do), the Commission ensured that Google would face an uphill battle, starting from a very high market share and thus a strong presumption of dominance. 

Moreover, that it might reach the same result by adopting a more accurate market definition is no excuse for adopting a faulty one and resting its case (and undertaking its entire analysis) on it. In fact, the Commission’s choice of a faulty market definition underpins its entire analysis, and is far from a “harmless error.” 

I shall discuss the consequences of this error in an upcoming blog post. Stay tuned.

Unexpectedly, on the day that the white copy of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers began garnering a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful Romance language.

Rather it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.

Much ado about data

This tempest in a teacup is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero-rated apps. Similar plans have been offered in Portugal since at least 2012.


These data packages are a clear win for mobile subscribers, especially pre-paid subscribers, who tend to be at a lower income level than post-paid subscribers. They allow consumers to customize their plan beyond their mobile broadband subscription, enabling them to consume data in ways that are better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two-year contract.
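
A quick back-of-the-envelope comparison, using the figures above, puts the difference in perspective:

\[
26 - 6.99 \approx 19\ \text{EUR saved per month}, \qquad \frac{6.99\ \text{EUR}}{10\ \text{GB}} \approx 0.70\ \text{EUR/GB} \quad \text{versus} \quad \frac{26\ \text{EUR}}{10\ \text{GB}} = 2.60\ \text{EUR/GB}.
\]

In other words, the app-specific packages deliver the same 10 GB at roughly a quarter of the standalone price, and without the two-year commitment.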

These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. Keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick which operator offers the best plan for them.

In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.

Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.


Net Neutrality in Portugal

But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning that the regulation became the law of the EU when it was enacted, and national governments, including Portugal, did not need to transpose it into national legislation.

While the regulation was automatically enacted in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero-rating plans (the Regulation would likely treat the data packages at issue here as zero-rated plans, since they offer users a large amount of data at a low price) in the hands of national regulators. While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.

In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study by the European Commission’s competition authority (DG Comp) concluding that there is little reason to believe that such data practices raise concerns.

The activists’ willful misunderstanding of clearly pro-consumer data plans and purposeful mischaracterization of Portugal as not having net neutrality rules are inflammatory and deceitful. Even more puzzling for the activists (but great for consumers) is that there is nothing in the 2015 Open Internet Order that would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.

Last year, Microsoft’s new CEO, Satya Nadella, seemed to break with the company’s longstanding “complain instead of compete” strategy to acknowledge that:

We’re going to innovate with a challenger mindset…. We’re not coming at this as some incumbent.

Among the first items on his agenda? Treating competing platforms like opportunities for innovation and expansion rather than obstacles to be torn down by any means possible:

We are absolutely committed to making our applications run what most people describe as cross platform…. There is no holding back of anything.

Earlier this week, at its Build Developer Conference, Microsoft announced its most significant initiative yet to bring about this reality: code built into its Windows 10 OS that will enable Android and iOS developers to port apps into the Windows ecosystem more easily.

To make this possible… Windows phones “will include an Android subsystem” meant to play nice with the Java and C++ code developers have already crafted to run on a rival’s operating system…. iOS developers can compile their Objective C code right from Microsoft’s Visual Studio, and turn it into a full-fledged Windows 10 app.

Microsoft also announced that its new browser, rebranded as “Edge,” will run Chrome and Firefox extensions, and that its Office suite would enable a range of third-party services to integrate with Office on Windows, iOS, Android and Mac.

Consumers, developers and Microsoft itself should all benefit from the increased competition that these moves are certain to facilitate.

Most obviously, more consumers may be willing to switch to phones and tablets with the Windows 10 operating system if they can continue to enjoy the apps and extensions they’ve come to rely on when using Google and Apple products. As one commenter said of the move:

I left Windows phone due to the lack of apps. I love the OS though, so if this means all my favorite apps will be on the platform I’ll jump back onto the WP bandwagon in a heartbeat.

And developers should invest more in development when they can expect additional revenue from yet another platform running their apps and extensions, with minimal additional development required.

It’s win-win-win. Except perhaps for Microsoft’s lingering regulatory strategy to hobble Google.

That strategy is built primarily on antitrust claims, most recently rooted in arguments that consumers, developers and competitors alike are harmed by Google’s conduct around Android which, it is alleged, makes it difficult for OS makers (like Cyanogen) and app developers (like Microsoft Bing) to compete.

But Microsoft’s interoperability announcements (along with a host of other rapidly evolving market characteristics) actually serve to undermine the antitrust arguments that Microsoft, through groups like FairSearch and ICOMP, has largely been responsible for pushing in the EU against Google/Android.

The reality is that, with innovations like the one Microsoft announced this week, Microsoft, Google and Apple (and Samsung, Nokia, Tizen, Cyanogen…) are competing more vigorously on several fronts. Such competition is evidence of a vibrant marketplace that is simply not in need of antitrust intervention.

The supreme irony in this is that such a move represents a (further) nail in the coffin of the supposed “applications barrier to entry” that was central to the US DOJ’s antitrust suit against Microsoft and that factors into the contemporary Android antitrust arguments against Google.

Frankly, the argument was never very convincing. Absent unjustified and anticompetitive efforts to prop up such a barrier, the “applications barrier to entry” is just a synonym for “big.” Admittedly, the DC Court of Appeals in Microsoft was careful — far more careful than the district court — to locate specific, narrow conduct beyond the mere existence of the alleged barrier that it believed amounted to anticompetitive monopoly maintenance. But central to the imposition of liability was the finding that some of Microsoft’s conduct deterred application developers from effectively accessing other platforms, without procompetitive justification.

With the implementation of initiatives like the one Microsoft has now undertaken in Windows 10, however, it appears that such concerns regarding Google and mobile app developers are unsupportable.

Of greatest significance to the current Android-related accusations against Google, the appeals court in Microsoft also reversed the district court’s finding of liability based on tying, noting in particular that:

If OS vendors without market power also sell their software bundled with a browser, the natural inference is that sale of the items as a bundle serves consumer demand and that unbundled sale would not.

Of course this is exactly what Microsoft Windows Phone (which decidedly does not have market power) does, suggesting that the bundling of mobile OS’s with proprietary apps is procompetitive.

Similarly, in reviewing the eventual consent decree in Microsoft, the appeals court upheld the conditions that allowed the integration of OS and browser code, and rejected the plaintiff’s assertion that a prohibition on such technological commingling was required by law.

The appeals court praised the district court’s recognition that an appropriate remedy “must place paramount significance upon addressing the exclusionary effect of the commingling, rather than the mere conduct which gives rise to the effect,” as well as the district court’s acknowledgement that “it is not a proper task for the Court to undertake to redesign products.”  Said the appeals court, “addressing the applications barrier to entry in a manner likely to harm consumers is not self-evidently an appropriate way to remedy an antitrust violation.”

Today, claims that the integration of Google Mobile Services (GMS) into Google’s version of the Android OS is anticompetitive are misplaced for the same reason:

But making Android competitive with its tightly controlled competitors [e.g., Apple iOS and Windows Phone] requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

In fact, some commenters have even suggested that, by effectively making the OS more “open,” Microsoft’s new Windows 10 initiative might undermine the Windows experience in exactly this fashion:

As a Windows Phone developer, I think this could easily turn into a horrible idea…. [I]t might break the whole Windows user experience Microsoft has been building in the past few years. Modern UI design is a different approach from both Android and iOS. We risk having a very unhomogenic [sic] store with lots of apps using different design patterns, and Modern UI is in my opinion, one of the strongest points of Windows Phone.

But just because Microsoft may be willing to take this risk doesn’t mean that any sensible conception of competition law and economics should require Google (or anyone else) to do so, as well.

Most significantly, Microsoft’s recent announcement is further evidence that both technological and contractual innovations can (potentially — the initiative is too new to know its effect) transform competition, undermine static market definitions and weaken theories of anticompetitive harm.

When apps and their functionality are routinely built into some OS’s or set as defaults; when mobile apps are also available for the desktop and are seamlessly integrated to permit identical functions to be performed on multiple platforms; and when new form factors like Apple MacBook Air and Microsoft Surface blur the lines between mobile and desktop, traditional, static anticompetitive theories are out the window (no pun intended).

Of course, it’s always been possible for new entrants to overcome network effects and scale impediments by a range of means. Microsoft itself has in the past offered to pay app developers to write for its mobile platform. Similarly, it offers inducements to attract users to its Bing search engine and it has devised several creative mechanisms to overcome its claimed scale inferiority in search.

A further irony (and market complication) is that now some of these apps — the ones with network effects of their own — threaten in turn to challenge the reigning mobile operating systems, exactly as Netscape was purported to threaten Microsoft’s OS (and lead to its anticompetitive conduct) back in the day. Facebook, for example, now offers not only its core social media function, but also search, messaging, video calls, mobile payments, photo editing and sharing, and other functionality that compete with many of the core functions built into mobile OS’s.

But the desire by apps like Facebook to expand their networks by being on multiple platforms, and the desire by these platforms to offer popular apps in order to attract users, ensure that Facebook is ubiquitous, even without any antitrust intervention. As Timothy Bresnahan, Joe Orsini and Pai-Ling Yin demonstrate:

(1) The distribution of app attractiveness to consumers is skewed, with a small minority of apps drawing the vast majority of consumer demand. (2) Apps which are highly demanded on one platform tend also to be highly demanded on the other platform. (3) These highly demanded apps have a strong tendency to multihome, writing for both platforms. As a result, the presence or absence of apps offers little reason for consumers to choose a platform. A consumer can choose either platform and have access to the most attractive apps.

Of course, even before Microsoft’s announcement, cross-platform app development was common, and third-party platforms like Xamarin facilitated cross-platform development. As Daniel O’Connor noted last year:

Even if one ecosystem has a majority of the market share, software developers will release versions for different operating systems if it is cheap/easy enough to do so…. As [Torsten] Körber documents [here], building mobile applications is much easier and cheaper than building PC software. Therefore, it is more common for programmers to write programs for multiple OSes…. 73 percent of apps developers design apps for at least two different mobiles OSes, while 62 percent support 3 or more.

Whether Microsoft’s interoperability efforts prove to be “perfect” or not (and some commenters are skeptical), they seem destined to at least further decrease the cost of cross-platform development, thus reducing any “application barrier to entry” that might impede Microsoft’s ability to compete with its much larger rivals.

Moreover, one of the most interesting things about the announcement is that it will enable Android and iOS apps to run not only on Windows phones, but also on Windows computers. Some 1.3 billion PCs run Windows. Forget Windows’ tiny share of mobile phone OS’s; that massive potential PC market (of which Microsoft still has 91 percent) presents an enormous ready-made market for mobile app developers that won’t be ignored.

It also points up the increasing absurdity of compartmentalizing these markets for antitrust purposes. As the relevant distinctions between mobile and desktop markets break down, the idea of Google (or any other company) “leveraging its dominance” in one market to monopolize a “neighboring” or “related” market is increasingly unsustainable. As I wrote earlier this week:

Mobile and social media have transformed search, too…. This revolution has migrated to the computer, which has itself become “app-ified.” Now there are desktop apps and browser extensions that take users directly to Google competitors such as Kayak, eBay and Amazon, or that pull and present information from these sites.

In the end, intentionally or not, Microsoft is (again) undermining its own case. And it is doing so by innovating and competing — those Schumpeterian concepts that were always destined to undermine antitrust cases in the high-tech sector.

If we’re lucky, Microsoft’s new initiatives are the leading edge of a sea change for Microsoft — a different and welcome mindset built on competing in the marketplace rather than at regulators’ doors.