What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg?
That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?
The complication in this regard is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a perfect gauge of the overall social welfare generated by particular policy choices.
As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.
Licensed, Unlicensed, & Shared Spectrum
Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.
[Figure omitted. SOURCE: U.S. Commerce Department]
Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.
Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as they do so with FCC-certified equipment operating at authorized power levels. The most well-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. But Wi-Fi isn’t the only use of unlicensed spectrum, which also supports Bluetooth devices, personal medical devices, appliances, and a wide range of Internet-of-Things devices.
As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.
[Figure omitted. SOURCE: Wall Street Journal]
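To make the congestion dynamic concrete, here is a minimal sketch (with made-up numbers) that treats an access point’s usable capacity as a fixed pool split equally among whoever is active at that moment. The 100 Mbps figure and the equal-sharing assumption are illustrative simplifications, not a model of any real router.

```python
# Illustrative only: per-user Wi-Fi throughput when a fixed-capacity
# access point is shared equally among active users.
def per_user_throughput(capacity_mbps: float, active_users: int) -> float:
    """Idealized equal share of capacity each active user receives."""
    return capacity_mbps / max(active_users, 1)

capacity = 100.0  # hypothetical usable capacity of a coffee-shop router (Mbps)
for users in (1, 5, 20, 50):
    share = per_user_throughput(capacity, users)
    print(f"{users:>2} active users -> {share:6.1f} Mbps each")
```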
While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:
The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.
The Need for Balanced Spectrum Policy
To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It can’t therefore be said that one approach is “better” than the other, as there is undeniable economic value to both.
That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.
To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.
For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.
[Hypothetical figure. SOURCE: Authors]
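For readers who want to play with the intuition behind the figure, the sketch below encodes the two curves with arbitrary functional forms. The hump-shaped consumer-welfare curve peaking near 56% licensed and the monotonically rising revenue curve are assumptions chosen for illustration only; they are not estimates.

```python
# Purely illustrative encoding of the hypothetical curves described above.
import numpy as np

licensed_share = np.linspace(0.0, 1.0, 101)            # fraction of spectrum licensed
consumer_welfare = 1.0 - (licensed_share - 0.56) ** 2  # assumed hump shape, peak at 56%
auction_revenue = 0.5 * licensed_share                 # assumed to rise with licensed share

print(f"Consumer welfare peaks at {licensed_share[np.argmax(consumer_welfare)]:.0%} licensed")
print(f"Auction revenue peaks at {licensed_share[np.argmax(auction_revenue)]:.0%} licensed")
```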
In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenues to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffuse costs.
Of course, we can debate the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed spectrum will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is roughly four times the total auction revenues received by the U.S. Treasury.
Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.
Conclusion
Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.
At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases. As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.
Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it needs to take seriously the need to encourage both private ownership of licensed spectrum and innovative uses of unlicensed and shared spectrum.
States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).
All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.
There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.
Some NOFO Requirements May Impede Broadband Deployment
The infrastructure bill’s statutory text declares that:
Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.
In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.
It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.
This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.
Which is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation on broadband.
The Infrastructure Act requires that each recipient of BEAD funding must offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:
In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.
Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.
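A toy example helps illustrate the cross-subsidy point. All numbers below are invented; the sketch simply contrasts a flat monthly price with recovering the same network cost in proportion to usage.

```python
# Invented numbers: how usage-based billing shifts cost recovery toward heavy users.
network_cost = 1_000.0                # hypothetical monthly cost of serving 10 subscribers
usage_gb = [50] * 8 + [1_000] * 2     # 8 light users, 2 heavy users

flat_price = network_cost / len(usage_gb)                       # everyone pays the same
total_gb = sum(usage_gb)
ubb_prices = [network_cost * gb / total_gb for gb in usage_gb]  # pay in proportion to use

print(f"Flat price per user:   ${flat_price:.2f}")
print(f"UBB price, light user: ${ubb_prices[0]:.2f}")
print(f"UBB price, heavy user: ${ubb_prices[-1]:.2f}")
```

Under the flat price, every subscriber pays $100; under proportional recovery, each light user pays about $21 while each heavy user pays about $417.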
While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.
But the actual impact of these de facto price caps could potentially be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”
The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”
For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.
The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.
Focusing on the Truly Unserved and Underserved
The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.
Some of the document’s other provisions, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GONs) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.
Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:
For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”
In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs as unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.
Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated by the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would serve to provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.
Conclusion
The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.
Congress opted not to include in the original infrastructure bill these potentially onerous requirements that NTIA now seeks, all without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that would fix these deficiencies.
In the U.S. system of dual federal and state sovereigns, a normative analysis reveals principles that could guide state antitrust-enforcement priorities, promote complementarity in federal and state antitrust policy, and thereby advance consumer welfare.
Discussion
Positive analysis reveals that state antitrust enforcement is a firmly entrenched feature of American antitrust policy. The U.S. Supreme Court (1) has consistently held that federal antitrust law does not displace state antitrust law (see, for example, California v. ARC America Corp. (U.S., 1989) (“Congress intended the federal antitrust laws to supplement, not displace, state antitrust remedies”)); and (2) has upheld state antitrust laws even when they have some impact on interstate commerce (see, for example, Exxon Corp. v. Governor of Maryland (U.S., 1978)).
The normative question remains, however, as to what the appropriate relationship between federal and state antitrust enforcement should be. Should federal and state antitrust regimes be complementary, with state law enforcement enhancing the effectiveness of federal enforcement? Or should state antitrust enforcement compete with federal enforcement, providing an alternative “vision” of appropriate antitrust standards?
The generally accepted (until very recently) modern American consumer-welfare-centric antitrust paradigm (see here) points to the complementary approach as most appropriate. In other words, if antitrust is indeed the “Magna Carta” of American free enterprise (see United States v. Topco Associates, Inc. (U.S., 1972)), and if consumer welfare is the paramount goal of antitrust (a position consistently held by the Supreme Court since Reiter v. Sonotone Corp. (U.S., 1979)), it follows that federal and state antitrust enforcement coexist best as complements, directed jointly at maximizing consumer-welfare enhancement. In recent decades it also generally has made sense for state enforcers to defer to U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) matter-specific consumer-welfare assessments. This conclusion follows from the federal agencies’ specialized resource advantage, reflected in large staffs of economic experts and attorneys with substantial industry knowledge.
The reality, nevertheless, is that while state enforcers often have cooperated with their federal colleagues on joint enforcement, state enforcement approaches historically have been imperfectly aligned with federal policy. That imperfect alignment has been at odds with consumer welfare in key instances. Certain state antitrust schemes, for example, continue to treat resale price maintenance (RPM) as per se illegal (see, for example, here), a position inconsistent with the federal consumer-welfare-centric rule-of-reason approach (see Leegin Creative Leather Products, Inc. v. PSKS, Inc. (U.S., 2007)). The disparate treatment of RPM has a substantial national impact on business conduct, because commercially important states such as California and New York are among those that continue to flatly condemn RPM.
State enforcers also have from time to time sought to oppose major transactions that received federal antitrust clearance, such as several states’ unsuccessful opposition to the Sprint/T-Mobile merger (see here). Although the states failed to block the merger, they did extract settlement concessions that imposed burdens on the merging parties, in addition to the divestiture requirements imposed by the DOJ in settling the matter (see here). Inconsistencies between federal and state antitrust-enforcement decisions on cases of nationwide significance generate litigation waste and may detract from final resolutions that optimize consumer welfare.
If consumer-welfare optimization is their goal (which I believe it should be in an ideal world), state attorneys general should seek to direct their limited antitrust resources to their highest valued uses, rather than seeking to second guess federal antitrust policy and enforcement decisions.
An optimal approach might focus first and foremost on allocating state resources to combat primarily intrastate competitive harms that are clear and unequivocal (such as intrastate bid rigging, hard-core price fixing, and horizontal market division). This could free up federal resources to focus on matters that are primarily interstate in nature, consistent with federalism. (In this regard, see a thoughtful proposal by D. Bruce Johnsen and Moin A. Yahya.)
Second, state enforcers could also devote some resources to assist federal enforcers in developing state-specific evidence in support of major national cases. (This would allow state attorneys general to publicize their “big case” involvement in a productive manner.)
Third, and not least, competition advocacy directed at the removal of anticompetitive state laws and regulations could prove an effective means of improving the competitive climate within individual states (see, for example, here). State antitrust enforcers could advance such advocacy through amicus curiae briefs and (where politically feasible) through interventions (perhaps informal) with peer officials who oversee regulation. Subject to this general guidance, the nature of state antitrust resource allocations would depend upon the specific competitive problems particular to each state.
Of course, in the real world, public-choice considerations and rent seeking may at times influence antitrust enforcement decision-making by state (and federal) officials. Nonetheless, this capsule normative summary of an idealized state antitrust-enforcement protocol is useful in that it highlights how state enforcers could usefully complement (assumed) sound federal antitrust initiatives.
Great minds think alike. A well-crafted and much more detailed normative exploration of ideal state antitrust enforcement is found in a recently released Pelican Institute policy brief by Ted Bolema and Eric Peterson. Entitled The Proper Role for States in Antitrust Lawsuits, the brief concludes (in a manner consistent with my observations):
This review of cases and leading commentaries shows that states should focus their involvement in antitrust cases on instances where:
· they have unique interests, such as local price-fixing
· they play a unique role, such as where they can develop evidence about how alleged anticompetitive behavior uniquely affects local markets
· they can bring additional resources to bear on existing federal litigation.
States can also provide a useful check on overly aggressive federal enforcement by providing courts with a traditional perspective on antitrust law — a role that could become even more important as federal agencies aggressively seek to expand their powers. All of these are important roles for states to play in antitrust enforcement, and translate into positive outcomes that directly benefit consumers.
Conversely, when states bring significant, novel antitrust lawsuits on their own, they don’t tend to benefit either consumers or constituents. These novel cases often move resources away from where they might be used more effectively, and states usually lose (as with the recent dismissal with prejudice of a state case against Facebook). Through more strategic antitrust engagement, with a focus on what states can do well and where they can make a positive difference in antitrust enforcement, states would best serve the interests of their consumers, constituents, and taxpayers.
Conclusion
Under a consumer-welfare-centric regime, an appropriate role can be identified for state antitrust enforcement that would helpfully complement federal efforts in an optimal fashion. Unfortunately, in this tumultuous period of federal antitrust policy shifts, in which the central role of the consumer welfare standard has been called into question, it might appear fatuous to speculate on the ideal melding of federal and state approaches to antitrust administration. One should, however, prepare for the time when a more enlightened, economically informed approach will be reinstituted. In anticipation of that day, serious thinking about antitrust federalism should not be neglected.
Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a determining first-mover advantage.
This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.
But are network effects and the like the only ways to explain why these markets look like this? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.
The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
The Bertrand Paradox
In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).
Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.
By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, and to one seller potentially capturing the entire market:
There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.
This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):
If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.
This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
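A stylized simulation conveys the undercutting logic Bertrand had in mind. The marginal cost, starting price, and undercut increment below are arbitrary; the only point is that iterated undercutting drives price toward marginal cost, at which point further undercutting becomes unprofitable.

```python
# Stylized Bertrand undercutting between two identical firms.
marginal_cost = 10.0
price = 20.0   # arbitrary starting (joint) price
step = 0.5     # size of each undercut

rounds = 0
while price - step > marginal_cost:
    price -= step   # a rival undercuts and temporarily captures the whole market
    rounds += 1

# With arbitrarily small undercuts, price converges to marginal cost exactly.
print(f"After {rounds} rounds of undercutting, price = {price:.2f} "
      f"(marginal cost = {marginal_cost:.2f})")
```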
But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:
On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.
All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).
The Theory of Contestable Markets
Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.
Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:
In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.
For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.
In other words, numerous competitors are a sufficient, but not necessary condition for competitive pricing. Monopolies can produce the same outcome when there is a present threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when there are extremely low barriers to entry.
Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that at least one exchange meets that user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) existing exchange to meet those needs would attract the entry of others to which the user could readily switch—thus keeping the behavior of the existing exchanges in check.
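A minimal sketch of this logic, under the textbook contestability assumptions of zero sunk entry costs and a rival that can undercut and exit freely, looks something like the following. The numbers are arbitrary.

```python
# Toy contestability check: with free entry and exit, any incumbent price
# above marginal cost invites profitable hit-and-run entry.
marginal_cost = 1.00
entry_cost = 0.00     # perfectly contestable: entry and exit are costless
market_size = 1_000   # units demanded (assumed fixed for simplicity)

def entry_is_profitable(incumbent_price: float) -> bool:
    rival_margin = incumbent_price - marginal_cost  # rival undercuts by a hair
    return rival_margin * market_size > entry_cost

for price in (1.00, 1.01, 1.50):
    print(f"incumbent price {price:.2f}: entry profitable? {entry_is_profitable(price)}")
```

Even a one-cent markup makes entry profitable when entry is free, which is the sense in which the mere threat of entry, rather than the number of incumbents, disciplines pricing.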
This has far-reaching implications for antitrust policy, as Baumol was quick to point out:
This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.
In light of the preceding, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of competition that firms face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.
To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration.
How Contestable Are Digital Markets?
The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.
The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.
Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.
First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.
These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.
Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.
A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).
Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet this new demand from a more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.
Conclusion
Of course, none of this should be construed to declare that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.
Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone will discipline the behavior of incumbents.
Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.
Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).
Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.
In short, critics’ failure to meaningfully grapple with these issues has shaped the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same empirical rigor was applied to tech-policy debates.
It’s a telecom tale as old as time: industry gets a prime slice of radio spectrum and falls in love with it, only to take it for granted. Then, faced with the reapportionment of that spectrum, it proceeds to fight tooth and nail (and law firm) to maintain the status quo.
In that way, the decision by the Intelligent Transportation Society of America (ITSA) and the American Association of State Highway and Transportation Officials (AASHTO) to seek judicial review of the Federal Communications Commission’s (FCC) order reassigning the 5.9GHz band was right out of central casting. But rather than simply asserting that the FCC’s order was arbitrary, ITSA foreshadowed many of the arguments that it intends to make against the order.
There are three arguments of note, and should ITSA win on the merits of any of those arguments, it would mark a significant departure from the way spectrum is managed in the United States.
First, ITSA asserts that the U.S. Department of Transportation (DOT), by virtue of its role as the nation’s transportation regulator, rather than the FCC, retains authority to regulate radio spectrum as it pertains to DOT programs. Of course, this notion is absurd on its face. Congress mandated that the FCC act as the exclusive regulator of non-federal uses of spectrum. This leaves the FCC free to—in the words of the Communications Act—“encourage the provision of new technologies and services to the public” and to “provide to all Americans” the best communications networks possible.
In contrast, other federal agencies with some amount of allocated spectrum each focus exclusively on a particular mission, without regard to the broader concerns of the country (including uses by sister agencies or the states). That’s why, rather than allocate the spectrum directly to DOT, the statute directs the FCC to consider allocating spectrum for Intelligent Transportation Systems and to establish the rules for their spectrum use. The statute directs the FCC to consult with the DOT, but leaves final decisions to the FCC.
Today’s crowded airwaves make it impossible to allocate spectrum for 5G, Wi-Fi 6, and other innovative uses without somehow impacting spectrum used by a federal agency. Accepting the ITSA position would fundamentally alter the FCC’s role relative to other agencies with an interest in the disposition of spectrum, rendering the FCC a vestigial regulatory backwater subject to non-expert veto. As a matter of policy, this would effectively prevent the United States from meeting the growing challenges of our exponentially increasing demand for wireless access.
It would also put us at a tremendous disadvantage relative to other countries. International coordination of wireless policy has become critical in the global economy, with our global supply chains and wireless equipment manufacturers dependent on global standards to drive economies of scale and interoperability around the globe. At the most recent World Radiocommunication Conference, in 2019, interagency spectrum squabbling significantly undermined U.S. negotiating efforts. If agencies actually had veto power over the FCC’s spectrum decisions, the United States would have no way to create a coherent negotiating position, let alone to advocate effectively for our national interests.
Second, though relatedly, ITSA asserts that the FCC’s engineers failed to appropriately evaluate safety impacts and interference concerns. It’s hard to see how this could be the case, given both the massive engineering record and the FCC’s globally recognized expertise in spectrum. As a general rule, the FCC leads the world in spectrum engineering (there is a reason things like mobile service and Wi-Fi started in the United States). No other federal agency (including DOT) has such extensive, varied, and lengthy experience with interference analysis. This allows the FCC to develop broadly applicable standards to protect all emergency communications. Every emergency first responder relies on this expertise every day that they use wireless communications to save lives. Here again, we see the wisdom in Congress delegating to a single expert agency the task of finding the right balance to meet all our wireless public-safety needs.
Third, the petition ambitiously asks the court to set aside all parts of the order, with the exception of the one portion that ITSA likes: freeing the top 30MHz of the band for use by C-V2X on a permanent basis. Given their other arguments, this assertion strains credulity. Either the FCC makes the decisions, or the DOT does. Giving federal agencies veto power over FCC decisions would be bad enough. Allowing litigants to play federal agencies against each other so they can mix and match results would produce chaos and/or paralysis in spectrum policy.
In short, ITSA is asking the court to fundamentally redefine the scope of FCC authority to administer spectrum when other federal agencies are involved; to undermine deference owed to FCC experts; and to do all of this while also holding that the FCC was correct on the one part of the order with which the complainants agree. This would make future progress in wireless technology effectively impossible.
We don’t let individual states decide which side of the road to drive on, or whether red or some other color traffic light means stop, because traffic rules only work when everybody follows the same rules. Wireless policy can only work if one agency makes the rules. Congress says that agency is the FCC. The courts (and other agencies) need to remember that.
AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice (DOJ), with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.
The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded.
This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.
The DOJ’s Antitrust Case against the Transaction
The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects.
This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.
Many commentators, both in the trade press and significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would hold a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers.
Notwithstanding this bevy of support for the government’s position, the DOJ’s case was rejected by the district court and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would pose any credible threat to withhold “must have” content from distributors. A key reason: the lost carriage fees AT&T would incur if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that withholding would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.
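The district court’s profitability logic can be captured in a back-of-the-envelope calculation. Every figure below is invented purely to show the structure of the tradeoff: withholding content pays only if the margin earned on subscribers who actually switch exceeds the carriage fees forgone.

```python
# Invented numbers illustrating the foreclosure tradeoff the court weighed.
lost_carriage_fees = 500_000_000   # hypothetical annual fees forgone by withholding content
profit_per_switcher = 300          # hypothetical annual margin per subscriber who switches
expected_switchers = 400_000       # hypothetical number of subscribers who actually migrate

gain_from_switchers = profit_per_switcher * expected_switchers
print(f"Gain from switchers:   ${gain_from_switchers:,}")
print(f"Forgone carriage fees: ${lost_carriage_fees:,}")
print("Withholding pays off" if gain_from_switchers > lost_carriage_fees
      else "Withholding is a money-losing strategy")
```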
The Fundamental Flaws in the DOJ’s Antitrust Case
The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize the point that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons.
False Assumption #1
The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities. Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.
False Assumption #2
Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or only want dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or start to only use) streaming services.
The Antitrust Case for the Transaction
When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.
Each of these incumbent platforms individually had (and has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, and each enjoys a worldwide subscriber base numbering in the hundreds of millions. If that’s not enough, AT&T was not the only entity to observe the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple TV+ expended approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, expended approximately $3.5 billion.
In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billion dollars in “content spend” to even stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.
The Enduring Virtues of Antitrust Prudence
AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.
Among some scholars, regulators, and legislators, it has become increasingly received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding, and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.
In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time-Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust law to provide enforcers and courts with greater latitude to block or re-engineer combinations that would not pose sufficiently demonstrated competitive risks under current statutory or case law.
The swift downfall of the AT&T/Time-Warner transaction casts great doubt on this critique and accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.
The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.
Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company.
But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.
Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.
The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention).
One thing that really comes through in @leah_nylen excellent set of articles on the FTC decision to abandon the GOOG lawsuit is just how *wrong* the economists were. /1
16. But the biggest reason is the point of the post. Economists. The commission’s antitrust economists made a very strong, and entirely wrong, argument against the case, which in retrospect rested on a set of laughably inaccurate predictions. pic.twitter.com/L5ecQcurAd
Reading through @leah_nylen's incredible scoop on how FTC dropped the ball on Google in 2013. This passage from the economics memo stands out as emblematic of the ways economists have repeatedly fumbled antitrust enforcement. pic.twitter.com/lKBgVfxI4H
Indeed, the overarching narrative is that the lawyers knew what was coming, while the economists took positions that turned out to be wildly off the mark:
But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:
— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.
— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.
— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.
— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.
The report thus asserts that:
The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.
That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]
What really emerges from the leaked memos, however, is that the analysis by both the FTC’s lawyers and its economists was infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard.
Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark.
Decisions Under Uncertainty
In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.
Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes and choosing a course of action under the assumption that the predictions they made might well be wrong.
Consider the following passage from FTC economist Ken Heyer’s memo:
The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]
In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.
Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?
In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.
Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here).
Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than by evidence that erroneous predictions materially affected the outcome of the proceedings.
To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets.
In short, not only were the economists’ memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.
Putting Erroneous Predictions in Context
So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.
But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.
This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.
In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.
Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:
The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.
FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.
This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.
But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:
When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.
The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:
Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”
It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
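To see how much the overlooked app traffic matters, here is a quick back-of-the-envelope sketch. The only inputs taken from the discussion above are the lawyers’ 92 percent referral figure and the roughly 40 percent app share from the earnings call; treating all non-app traffic as browser traffic, and assuming the 92 percent figure applies only to that browser traffic, are our simplifying assumptions.

```python
# Back-of-the-envelope sketch (simplifying assumptions noted above): if ~40% of
# Yelp searches arrived via its own mobile app with no Google involvement, then a
# 92% Google-referral rate measured on browser traffic alone implies a much smaller
# Google-referred share of *all* Yelp searches.

app_share = 0.40                   # share of searches from Yelp's app (per the earnings call)
browser_share = 1.0 - app_share    # assumption: all remaining traffic arrives via browsers
google_share_of_browser = 0.92     # the FTC lawyers' figure, measured on browser referrals

google_share_of_all = browser_share * google_share_of_browser
print(f"Implied Google-referred share of all Yelp searches: {google_share_of_all:.0%}")
# -> roughly 55%, well short of the 92% headline number
```

None of this settles the right market definition, but it illustrates why browser-only referral figures overstate Yelp’s dependence on Google.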
Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation).
In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.
The FTC Lawyers’ Weak Case for Prosecuting Google
At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.
Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:
A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.
If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proven an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.
The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.
Moreover, as Ben Thompson argues in his Stratechery newsletter:
The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.
This difficulty was deftly highlighted by Heyer’s memo:
If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]
Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.
And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.
Google’s ‘revenue-sharing’ agreements
It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither group recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple, as well as various carriers and manufacturers, to pre-install its search bar on mobile devices:
FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.
The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance.
To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).
Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:
This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.
This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:
[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.
Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.
Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):
Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.
Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.
Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system.
In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.
Self-preferencing
Another finding that critics hold up as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:
When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers
The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:
Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites….
…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]
More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:
A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control….
…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….
…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk?
Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time.
Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.
Competitor Harm Is Not an Indicator of the Need for Intervention
Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:
Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.
But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents.
This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:
Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives….
…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest….
…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.
Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:
They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.
Conclusion
When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.
But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.
In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.
Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market’s attribution of dramatically high valuations to dramatically unprofitable businesses and regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.
This logic is usually wrong.
The Overlooked Basics of Platform Economics
To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.
A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side of the platform. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will always converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.
There is a critical point, however, that often seems to be overlooked.
Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet-food delivery market but the costs of entry are minimal, then its market power is negligible. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs to enter the market, but also on users’ costs of switching from one platform to another, or of alternating between multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.
This is why, contrary to natural intuition, a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0). It is often asserted, however, that users are typically locked into the dominant platform and face high switching costs, which implicitly satisfies the moat requirement. If that were true, the “high churn” scenario would be a theoretical curiosity, and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case.
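To make the “churn limits power over price” point concrete, here is a minimal illustrative sketch (all numbers are hypothetical and not drawn from any market discussed here): an incumbent platform can sustain a price premium no larger than users’ cost of switching to the best alternative, so as that cost falls toward zero, so does the incumbent’s pricing power, whatever its market share.

```python
# Hypothetical illustration: the highest price a dominant platform can sustain is
# bounded by the best rival's price plus the user's switching cost. Market share
# appears nowhere in the bound; low switching costs alone discipline price.

def max_sustainable_price(rival_price: float, switching_cost: float) -> float:
    """Price above which a rational user defects to the rival."""
    return rival_price + switching_cost

RIVAL_PRICE = 10.00  # hypothetical price charged by the best alternative platform
for switching_cost in (5.00, 1.00, 0.10, 0.00):
    cap = max_sustainable_price(RIVAL_PRICE, switching_cost)
    markup = (cap - RIVAL_PRICE) / RIVAL_PRICE
    print(f"switching cost ${switching_cost:.2f} -> price cap ${cap:.2f} "
          f"(markup over rival: {markup:.0%})")
```

The sketch abstracts from network effects and multi-homing, but it captures why churn, rather than share, is the binding constraint on price.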
AWS and the Cloud Data-Storage Market
This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations.
While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other data-storage and cloud-related services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological change, enabling entry at formerly bundled portions of the cloud data-services package. While switching is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation.
The Surprising Instability of Platform Dominance
The instability of leadership positions in the cloud storage market is not exceptional.
Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and AltaVista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.
Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm.
DoorDash and the Food-Delivery Services Market
Consider the DoorDash IPO that launched in early December 2020. The market’s current valuation of approximately $50 billion for a business that has been almost consistently unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will then yield power over price and exceptional returns for investors.
There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on whether high market share is likely to translate into market power in this context.
Deliveroo and the Costs of Regulatory Autopilot
Just as the stock market can suffer from delusions of platform grandeur, so too do some competition regulators appear to have fallen prey to the same malady.
A vivid illustration is provided by the 2019 decision by the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.
Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency. This is the equivalent of a competition regulator driving in reverse.
Concluding Thoughts
There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, it is a reason to intervene. This assumption is sometimes borne out, and, in that case, antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” However, several conditions must be met before the market-power assumption can be supported; absent those conditions, any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.
Without closer scrutiny, reflexively equating market share with market power is prone to lead both investors and regulators astray.
In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some point modify this policy with adverse competitive effects.
Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.
Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects for the competitive health of the wireless ecosystem.
How IP Licensing Promotes Market Access
Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.
Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.
Regulatory Intervention and Market Distortion
This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates were exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.
Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.
Assuming no change in IP licensing policy on the horizon, it is therefore not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Or alternatively, an IP licensor entity might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.
The Error Costs of Non-Evidence-Based Antitrust
These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.
Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Kristian Stout is director of innovation policy for the International Center for Law & Economics.]
One of the themes that has run throughout this symposium is that, during his tenure as both a commissioner and as chairman, Ajit Pai brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.
The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship while he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both charted how he would approach future issues as chairman and helped the public understand exactly how he would handle new challenges before the FCC (McDowell, Wright).
One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.
Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.
Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that long had been rooted in 20th century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes:
Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.
This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media will be much brighter thanks to Chairman Pai’s vision.
Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher-value uses than under any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is not free or costless to maintain, and Tom Hazlett believes that there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest use:
The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.
And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether those spectrum bands could be opened up for wireless use, to the L-Band Order, where the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.
The controversial L-Band Order was another example of where Chairman Pai displayed both political acumen as well as an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai was deftly able to shepherd the L-Band Order and guarantee that important spectrum was made available for commercial wireless use.
As a native of Kansas, Chairman Pai placed rural broadband rollout high on the list of priorities at the FCC, and his work over the last four years demonstrates this pride of place (Hurwitz, Wright). As Gus Hurwitz notes, “the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity.”
Further, other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G Fund, provides the necessary policy framework with which to extend greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places outside of traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).
But perhaps one of Chairman Pai’s best and (hopefully) most lasting contributions will be de-politicizing the FCC and increasing the transparency with which it operated. In contrast to previous administrations, the Pai FCC had an overwhelmingly bipartisan nature, with many bipartisan votes regularly taken at monthly meetings (Jamison). In important respects, this bipartisan (or nonpartisan) nature was directly reflected in Chairman Pai’s championing of the Office of Economics and Analytics (OEA) at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig), the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful not just to hire a bunch of economists, but to learn from other agencies that have better integrated economics and to establish a structure that would enable the commission’s economists to contribute materially to better policy.
We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017-2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys. Once conclusions were reached, economics would often be backfilled in to support those conclusions. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source for information and policy analysis before conclusions were reached. As Wright noted, the Federal Trade Commission had adopted this approach. With the FCC moving to do this as well, communications policy in the United States is on much sounder footing thanks to Chairman Pai.
Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.
Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:
The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.
Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff’s editorial privileges over an order’s text and introducing the use of a simple “fact sheet” to explain orders (Lyons).
One of the most interesting insights into the character of Chairman Pai was his willingness to reverse course and take risks to ensure that the FCC promoted innovation instead of obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of SpaceX’s prospects for introducing broadband through its low-Earth-orbit satellite systems, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman’s willingness to change his mind and of his refusal to let policy remain in a comfortable zone that excludes potential innovation.
The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as commitment to careful, economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Thomas W. Hazlett is the H.H. Macaulay Endowed Professor of Economics at Clemson University.]
Disclosure: The one time I met Ajit Pai was when he presented a comment on my book, “The Political Spectrum,” at a Cato Institute forum in 2018. He was gracious, thorough, and complimentary. He said that while he had enjoyed the volume, he hoped not to appear in upcoming editions. I took that to imply that he read the book as harshly critical of the Federal Communications Commission. Well, when merited, I concede. But it left me to wonder if he had followed my story to its end, as I document the success of reforms launched in recent decades and advocate their extension. Inclusion in a future edition might work out well for a chairman’s legacy. Or…
While my comment here focuses on radio-spectrum allocation, there was a notable reform achieved during the Pai FCC that touches on the subject, even if far more general in scope. In January 2018, the commission voted to initiate an Office of Economics and Analytics.[1] The organizational change was expeditiously instituted that same year, with the new unit stood up under the leadership of FCC economist Giulia McHenry.[2] I had long proposed an FCC “Office of Economic Analysis” on the grounds that it had a reasonable prospect of improving evidence-based policymaking, allowing cost-benefit calculations to be made in a more professional, independent, and less political context.[3] I welcome this initiative by the Pai FCC and look forward to the empirical test now underway.[4]
Big Picture
Spectrum policy had notable triumphs under Chairman Pai but was—as President Carter described the failed 1980 Iran hostage rescue mission—an “incomplete success.” The main cause for celebration was the campaign to push spectrum-access rights into the marketplace. Pai’s public position was straightforward: “Our spectrum strategy calls for making low-band, mid-band, and high-band airwaves available for flexible use,” he wrote in an FCC blog post on June 19, 2018. But the means regulators use to pursue that policy agenda have, historically, proven determinative. The Pai FCC traveled pathways both effective and ineffective, and we should learn from both. The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models. The traditional spectrum-allocation approach is to permit exactly what the FCC finds to be the best use of spectrum, but this assumes knowledge about the value of alternatives that the regulator does not possess. Moreover, it assumes away the costs of regulators imposing their solutions over and above a competitive process that might have less direction but more freedom. In a 2017 notice, the FCC displayed the progress we have made in departing from administrative control when it sought guidance from private-sector commenters this way:
“Are there opportunities to incentivize relocation or repacking of incumbent licensees to make spectrum available for flexible broadband use?
We seek comment on whether auctions … could be used to increase the availability of flexible use spectrum?”
By focusing on how rights—not markets—should be structured, the FCC may side-step useless food fights and let social progress flow.[5]
Progress
Spectrum-allocation results were realized. Indeed, when one looks at the pattern in licensed and unlicensed allocations for “flexible use” under 10 GHz, the recent four-year interval coincides with generous increases, both in absolute terms and relative to trend. See Figure 1. These data feature expansions in bandwidth via liberal licenses that include 70 MHz for CBRS (3.5 GHz band), with rights assigned in Auction 105 (2020), and 280 MHz (3.7–3.98 GHz) assigned in Auction 107 (2020-21, soon to conclude). The 70 MHz added via Auction 1002 (600 MHz) in 2017 was accounted for during the previous FCC, but substantial bandwidth was added in the millimeter-wave bands via Auctions 101, 102, and 103 (not shown in Figure 1, which focuses on low- and mid-band rights).[6] Meanwhile, multiple increments of unlicensed spectrum were allocated in 2020: 30 MHz shifted from the Intelligent Transportation Services set-aside at 5.9 GHz, 80 MHz of general-access CBRS spectrum, and 1,200 MHz at 6 GHz dedicated to Wi-Fi-type services.[7] Substantial millimeter-wave frequency space had previously been set aside for unlicensed operations in 2016.[8]
Figure 1. Licensed and unlicensed “flexible use” allocations under 10 GHz. Source: FCC and author’s calculations.
But those additions are not the elephant in the room; Auction 107 is. It assigned licenses allocated 280 MHz of flexible-use mid-band spectrum, producing at least $94 billion in gross bids (of which about $13 billion will be paid to incumbent satellite licensees to reconfigure their operations so as to occupy just 200 MHz, rather than 500 MHz, of the 3.7–4.2 GHz band).[9] This crushes previous FCC sales; indeed, it constitutes about 42% of all FCC auction receipts to date[10]:
FCC auction receipts, 2020 (Auctions 103 and 105): $12.1 billion
FCC auction winning bids, 2020 (Auction 107): $94 billion (gross bids including relocation costs, incentive payments, and before Assignment Phase payments)
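As a rough back-of-the-envelope check on that 42% figure (assuming the roughly $117 billion in cumulative receipts through 2019 reported in note [10], plus the 2020 totals above; all figures in billions of dollars):

$$\frac{94}{117 + 12.1 + 94} \approx \frac{94}{223.1} \approx 0.42$$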
The addition of the 280 MHz to existing flexible-use spectrum suitable for mobile (aka Commercial Mobile Radio Services, or CMRS) is the largest increment ever released. It will account for about one-fourth of the low- and mid-band frequencies available via liberal licenses. This constitutes a huge advance for 5G deployments, and its effects go much further—promoting competition, spurring innovation in apps, devices, and the Internet of Things, and pushing the technological envelope toward 6G and beyond. Notably, the U.S. has led this foray to a new frontier in spectrum allocation.
The FCC deserves praise for pushing this proceeding to fruition. So, here it is: the C-Band is a very big deal and a major policy success. And more: in Auction 107, the commission very wisely sold overlay rights. It did not wait for administrative procedures to reconfigure wireless use while tightly supervising new “sharing” of the band, but instead (a) accepted the incumbents’ basic strategy for reallocation, (b) sold new prospective rights to high bidders, subject to protection of incumbents, (c) used a fraction of the proceeds to fund incumbents cooperating with the reallocation, plussing up payments for those hitting deadlines, and (d) implicitly relied on the new licensees to push the relocation process forward.
Challenges
It is interesting that the FCC sort of articulated this useful model, and sort of did not:
For a successful public auction of overlay licenses in the 3.7-3.98 GHz band, bidders need to know before an auction commences when they will get access to that currently occupied spectrum as well as the costs they will incur as a condition of their overlay license. (FCC C-Band Order [Feb. 7, 2020], par. 110)
A germ of truth, but note: Auction 107 also demonstrated just the reverse. Rights were sold prior to clearing the airwaves, and bidders—while liable for “incentive payments”—do not know with certainty when the frequencies will be available for their use. Risk is embedded, as it is in financial assets generally (corporate equity shares are efficiently traded despite wide disagreement over future earnings), and yet markets perform. Indeed, the “certainty” approach touted by the FCC in its language about a “successful public auction” has long deterred efficient reallocations, as the incumbents’ exit process holds up the arrival of entrants. The central feature of the C-Band reallocation was not to create certainty, but to embed an overlay approach into the process. This draws incumbents and entrants together into positive-sum transactions (mediated by the FCC or party-to-party) in which they cooperate to create new productive opportunities, sharing the gains.
The inspiration for the C-Band reallocation of satellite spectrum was bottom-up. As with so much of the radio spectrum, the band devoted to satellite distribution of video (relays to and from an array of broadcast and cable TV systems and networks) was old and tired. For decades, applications and systems were locked in by law. They consumed lots of bandwidth while ignoring the emergence of newer technologies such as fiber optics (worth emphasizing: products launched in the 1980s still frame the cutting-edge challenges of 2021 spectrum policy). Spying this mismatch, and seeking gains from trade, creative risk-takers petitioned the FCC.
In a mid-2017 request, computer chipmaker Intel and C-Band satellite carrier Intelsat (no corporate relationship) joined forces to ask for permission to expand the scope of satellite licenses. The proffered plan was for license holders to invest in spectrum economies by upgrading satellites and earth stations—magically creating new, unoccupied channels in prime mid-band frequencies perfect for highly valuable 5G services. All existing video transport services would continue, while society would enjoy way more advanced wireless broadband. All regulators had to do was allow “change of use” in existing licenses. Markets would do the rest: satellite operators would make efficient multi-billion-dollar investments, coordinating with each other and their customers, and then take bids from new users itching to access the prime 4 GHz spectrum. The transition to bold, new, more valuable applications would compensate legacy customers and service providers.
This “spectrum sharing” can spin gold – seizing on capitalist discovery and demand revelation in market bargains. Voila, the 21st century, delivered.
Well, yes and no. At first, the filing drew the standard bureaucratic response: a yawn. But the proposal took off when Chairman Pai—alertly, and in the public interest—embraced it, putting it on the July 12, 2018 FCC meeting agenda. Intelsat’s market cap jumped from about $500 million to over $4.5 billion—visible evidence that the spectrum it was using was worth far more than the service it was providing, and that the market expected the company to realize some substantial fraction of that revaluation.[11]
While the Pai FCC leaned in the proper policy direction, politics soon blew the process down. Congress denounced the “private auction” as a “windfall,” bellowing against the unfairness of allowing corporations (some foreign-owned!) to cash out. The populist message was upside-down. The social damage created by mismanagement of spectrum—millions of Americans paying more and getting less from wireless than they otherwise would, robbing ordinary citizens of vast consumer surplus—was being fixed by entrepreneurial initiative. Moreover, the public gains (lower prices plus innovation externalities spun off from liberated bandwidth) were undoubtedly far greater than any rents captured by the incumbent licensees. And there was a great bonus to spur future progress: rewarding the parties that initiate and secure efficiency-enhancing rights will unleash vastly more productive activity.
But the populist winds—gale force and bipartisan—spun the FCC.
It was legally correct that Intelsat and its rival satellite carriers did not own the spectrum allocated to the C-Band. Indeed, that was the root of the problem. And here is the fatal catch: in applying for broader spectrum property rights, they revealed a valuable discovery. The FCC, posing as referee, turned competitor, appropriated the proffered business plan on behalf of its client (the U.S. government), and then auctioned it to bidders. Regulators did tip the incumbents, whose help was still needed in reorganizing the C-Band, setting $3.3 billion as a fair price for “moving costs” (changing out technology to reduce their transmission footprints) and dangling another $9.7 billion in “incentive payments” not to dilly-dally. In total, carriers have bid some $93.9 billion, or $1.02 per MHz-Pop.[12] This is 4.7 times the price paid for the Priority Access Licenses (PALs) allocated 70 MHz in Auction 105 earlier in 2020.
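As a rough sketch of how the per-MHz-Pop figure is derived (assuming the licenses cover approximately the full U.S. population of about 330 million; the precise population basis is not stated here):

$$\frac{\$93.9\ \text{billion}}{280\ \text{MHz} \times 330\ \text{million pops}} \approx \$1.02\ \text{per MHz-Pop}$$

The 4.7-times comparison then implies a PAL price of roughly $1.02 / 4.7 ≈ $0.22 per MHz-Pop, broadly consistent with the approximately $4.5 billion paid for 70 MHz in Auction 105, discussed below.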
The TOTM assignment was not to evaluate Ajit Pai but to evaluate the Pai FCC and its spectrum policies. On that scale, great value was delivered by the Intel-Intelsat proposal, and the FCC’s alert endorsement, offset in some measure by the long-term losses that will likely flow from the dirigiste retreat to fossilized spectrum rights controlled by diktat.
Sharing Nicely
And that takes us to 2020’s Auction 105 (Citizens Broadband Radio Services, CBRS). The U.S. has lagged much of the world in allocating flexible-use spectrum rights in the 3.5 GHz band. Ireland auctioned rights to use 350 MHz in May 2017, and many countries did likewise between then and 2020, distributing far more than the 70 MHz allocated to the U.S. Priority Access Licenses (PALs); national allocations in that period ranged from 150 MHz to 390 MHz. The Pai FCC can plausibly attribute the lag to “preexisting conditions.” Here, however, I will stress that the Pai FCC did not substantially further our understanding of the costs of “spectrum sharing” under coordinating devices imposed by the FCC.
All commercially valuable spectrum bands are shared. The most intensely shared, in the relevant economic sense, are those bands curated by mobile carriers. These frequencies are complemented by extensive network capital supplied by investors, and permit millions of users—including international roamers—to gain seamless connectivity. Unlicensed bands, alternatively, tend to separate users spatially, powering down devices to localize footprints. These limits work better in situations where users desire short transmissions, like a Bluetooth link from iPhone to headphone or when bits can be handed off to a wide area network by hopping 60 feet to a local “hot spot.” The application of “spectrum sharing” to imply a non-exclusive (or unlicensed) rights regime is, at best, highly misleading. Whenever conditions of scarcity exist, meaning that not all uses can be accommodated without conflict, some rationing follows. It is commonly done by price, behavioral restriction, or both.
In CBRS, the FCC has imposed three layers of “priority” access across the 3550-3700 MHz band. Certain government radars are assumed to be fixed and must be protected. When in use, these systems demand that other wireless services stay silent on particular channels. Next in line are PAL owners, parties that have paid for exclusivity but are not guaranteed access to a given channel. These rights, which sold for about $4.5 billion, are allocated dynamically by a controller (a Spectrum Access System, or SAS); radios and networks automatically and continuously check in with the SAS to obtain spectrum-access permissions. Seven PALs, allocated 10 MHz each, have been assigned, 70 MHz in total. Finally, General Authorized Access (GAA) is granted without limit or exclusivity to radio devices across the 80 MHz remaining in the band, plus any PAL channels not in use. Some 5G phones are already equipped to use such bands on an unlicensed basis.
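To make the three-tier structure concrete, here is a minimal, hypothetical sketch of priority-ordered channel assignment of the kind a CBRS-style controller performs. It is not the actual SAS protocol specified by the FCC and WInnForum; every name, parameter, and channel count below is an illustrative assumption.

```python
# Minimal illustration of CBRS-style three-tier channel assignment.
# Hypothetical sketch only -- NOT the actual Spectrum Access System (SAS) protocol.

from dataclasses import dataclass
from enum import IntEnum


class Tier(IntEnum):
    INCUMBENT = 0  # federal radar: always protected, never preempted
    PAL = 1        # Priority Access License: paid priority over GAA
    GAA = 2        # General Authorized Access: opportunistic, unlicensed-like


@dataclass
class Request:
    user_id: str
    tier: Tier


def assign_channels(requests, num_channels=15, radar_active=()):
    """Grant 10 MHz channels in tier order; radar-occupied channels are off-limits."""
    free = [ch for ch in range(num_channels) if ch not in set(radar_active)]
    grants = {}
    # Lower Tier value = higher priority, so PAL requests are served before GAA.
    for req in sorted(requests, key=lambda r: r.tier):
        if not free:
            break  # remaining (lower-priority) requests get no grant this cycle
        grants[req.user_id] = free.pop(0)
    return grants


if __name__ == "__main__":
    reqs = [
        Request("gaa-hotspot-1", Tier.GAA),
        Request("pal-carrier-A", Tier.PAL),
        Request("gaa-hotspot-2", Tier.GAA),
        Request("pal-carrier-B", Tier.PAL),
    ]
    # Suppose radar is active on channels 0-2: everyone else must vacate them.
    print(assign_channels(reqs, num_channels=5, radar_active=(0, 1, 2)))
    # {'pal-carrier-A': 3, 'pal-carrier-B': 4} -- the GAA hotspots wait.
```

The point of the sketch is only that exclusivity here is conditional: even a PAL holder’s access is continuously mediated by the controller, which is part of the overhead cost discussed below.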
We shall see how the U.S. system works in comparison to alternatives. What is important to note is that the particular form of “spectrum sharing” is neither necessary nor free. As is standard outside the U.S., exclusive rights analogous to CMRS licenses could have been auctioned here, with U.S. government radars given vested rights.
One point that is routinely missed is that the decision to have the U.S. government partition the rights into three layers immediately conceded that U.S. government priority applications (for radar) would never shift. That is asserted as though it were a proposition needing no justification, but it is precisely the sort of impediment to efficiency that has plagued spectrum reallocations for decades. It was, for instance, the 2002 assumption behind TV “white spaces”—that 402 MHz of TV-band frequencies was fixed in place, and that the unused channels could never be repackaged, sold as exclusive rights, and diverted to higher-valued uses. That unexamined assertion has since been proven wrong, as seen in the reduction of the band from 402 MHz to 235 MHz following Auctions 73 (2008) and 1001/1002 (2016-17), as well as in the clear possibility that remaining TV broadcasts could today be entirely transferred to cable, satellite, and OTT broadband (as they have already, effectively, been). The problem in CBRS is that the rights now distributed—80 MHz of unlicensed access, with its protections of certain priority services—do not sprinkle the proper rights into the market such that positive-sum transitions can be negotiated. We are stuck with whatever inefficiencies this “preexisting condition” of the 3.5 GHz band might endow, unless another decade-long FCC spectrum allocation can move things forward.[13]
Already visible is that the rights sold as PALs in CBRS fetched only about 20% of the per-MHz-Pop value of the rights sold in the C-Band. This differential reflects the power restrictions and overhead costs embedded in the FCC’s sharing rules for CBRS (involving dynamic allocation of the exclusive access rights conveyed in PALs) but avoided in the C-Band. In the latter, the sharing arrangements are delegated to the licensees, and the prices bidders paid reveal that they see those rights as more productive, with opportunities to host more services.
There should be greater recognition of the relevant trade-offs in imposing coexistence rules. Yet the Pai FCC succumbed, in the 5.9 GHz and 6 GHz bands, to the tried-and-true options of Regulation Past. This was hugely ironic in the former, where the FCC had in 1999 imposed unlicensed access under rules that favored a specific automotive-informatics technology—Dedicated Short-Range Communications (DSRC)—that proved a 20-year bust. In diagnosing this policy blunder, the FCC then repeated it, splitting off a 45 MHz band with Wi-Fi-friendly unlicensed rules and leaving 30 MHz set aside for automotive intelligent-transportation services. A liberalization of rights that allowed a “private auction” to change the use of the band would have been the preferred approach. Instead, we are left with a partition of the band into rival rule regimes, again established by administrative fiat.
This approach was then imposed again in the large 1,200 MHz unlicensed allocation at 6 GHz, making a big 2020 splash. The FCC here assumed, categorically, that unlicensed rules are the best way to sponsor spectrum coordination. It ignored the costs of that coordination. And the commission appears to have forgotten the progress it has made with innovative policy solutions that pull market forces in through “overlay” licenses. These useful devices were used, in one form or another, to reallocate spectrum for 2G in Auction 4, for AWS in Auction 66, for millimeter-wave bands in Auctions 102 and 103, in the “TV Incentive Auction,” and for the satellite C-Band in Auction 107, and they recently appeared as star players in the January 2021 FCC plan to rationalize the complex mix of rights scattered around the 2.5 GHz band.[14] Where a band plan is too complicated for administrators to untangle, it can be transactionally more efficient to let market competitors figure it out.
The Future
The re-allocations in the 5.9 GHz and 6 GHz bands may yet host productive services. One can hope. But how will regulators know that the options allowed, and taken, are superior to the alternatives—suppressed by law for the next five, 10, or 20 years—that might have emerged had competitors had the right to test business models or technologies disfavored by regulators’ best-laid plans? That is the thinking that locked in the TV band, the satellite C-Band, and the ITS band. It is what we have learned to be problematic throughout the political radio spectrum. We shall see, as Chairman Pai speculated, what future chapters these decisions leave for future editions.
[3] Thomas Hazlett, Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins, Resources for the Future Discussion Paper 11-23 (May 2011); David Honig, FCC Reorganization: How Replacing Silos with Functional Organization Would Advance Civil Rights, 3 University of Pennsylvania Journal of Law and Public Affairs 18 (Aug. 2018).
[4] It is with great sadness that I note that Jerry Ellig, the 2017-18 FCC chief economist who might well have offered the most careful analysis of such a structural reform, will not be available for the task – one he had already begun, writing a recent essay with two other former FCC chief economists: Babette Boliek, Jerry Ellig and Jeff Prince, Improved economic analysis should be lasting part of Pai’s FCC legacy, The Hill (Dec. 29, 2020). Jerry’s sudden passing, on January 21, 2021, is a deep tragedy. Our family weeps for his wonderful wife, Sandy, and his precious daughter, Kat.
[6] In 2018-19, FCC Auctions 101 and 102 offered licenses allocated 1,550 MHz of bandwidth in the 24 GHz and 28 GHz bands, although some of the bandwidth had previously been assigned and post-auction confusion over interference with adjacent frequency uses (in 24 GHz) has impeded some deployments. In 2020, Auction 103 allowed competitive bidding for licenses to use 37, 39, and 47 GHz frequencies, 3400 MHz in aggregate. Net proceeds to the FCC in 101, 102 and 103 were: $700.3 million, $2.02 billion, and $7.56 billion, respectively.
[7] I estimate that unlicensed bandwidth allocated for television white-space devices was reduced by some 70 MHz pursuant to the Incentive Auction in 2017. This, however, was baked into spectrum policy prior to the Pai FCC.
[8] Notably, 64-71 GHz was allocated for unlicensed radio operations in the Spectrum Frontiers proceeding, adjacent to the 57-64 GHz unlicensed bands. See Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, et al., Report and Order and Further Notice of Proposed Rulemaking, 31 FCC Rcd 8014 (2016), 8064-65, para. 130.
[9] The revenues reflect bids made in the Clock phase of Auction 107. An Assignment Phase has yet to occur as of this writing.
[10] The 2021 FCC Budget request, p. 34: “As of December 2019, the total amount collected for broader government use and deficit reduction since 1994 exceeds $117 billion.”
[11] Kerrisdale Management issued a June 2018 report that tied the proceeding to a dubious source: “to the market-oriented perspective on spectrum regulation – as articulated, for instance, by the recently published book The Political Spectrum by former FCC chief economist Thomas Winslow Hazlett – [that] the original sin of the FCC was attempting to dictate from on high what licensees should or shouldn’t do with their spectrum. By locking certain bands into certain uses, with no simple mechanism for change or renegotiation, the agency guaranteed that, as soon as technological and commercial realities shifted – as they do constantly – spectrum use would become inefficient.”
[12] Net proceeds will be reduced to reflect bidding credits extended to small businesses, but additional bids will be received in the Assignment Phase of Auction 107, still to be held. Likely totals will remain somewhere around current levels.
[13] The CBRS band is composed of frequencies at 3550-3700 MHz. The top 50 MHz of that band was officially allocated in 2005 in a proceeding that started years earlier. It was then curious that the adjacent 100 MHz was not included.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]
It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug alone has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.
I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.
The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the Rural Digital Opportunity Fund (RDOF), which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of underserved areas should receive more than $16 billion in federal funding could spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.
But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.
That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology, and previous satellite offerings had been high-latency, which is acceptable for some uses but not for others.
But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.
I think we are unlikely to see such regulatory risk-taking, both technically and politically, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views about topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The most defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.