
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]

I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a recent telecom policy event in Washington, D.C., featuring a couple of well-known participants in federal telecom policy talking about how to close the rural digital divide. The punchline was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.

And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.

Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more time as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on this trip. I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.

Rural Digital Divide Policy on the Ground

Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:

For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]

Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.

It’s unsurprising that this is the case. Looking at North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models—the cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are relatively fewer economies of scale that allow carriers to amortize these costs across multiple households. We can add that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.

On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.

Pai’s Progress Closing the Rural Digital Divide

This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.

On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Chairman Pai’s resume. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction-based approach to supporting service deployment.

This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.

The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.
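To illustrate the logic of a technology-neutral reverse auction, and why every dollar saved on one area can connect another, here is a minimal sketch in Python. It is a toy model with hypothetical bids and a simplified lowest-subsidy-per-location rule, not the FCC's actual RDOF descending-clock auction with its performance-tier weights.

```python
# Toy reverse-auction allocator: hypothetical bids, hypothetical budget.
# Awards go to the lowest subsidy per location, regardless of technology,
# with at most one winner per area.
from dataclasses import dataclass

@dataclass
class Bid:
    area: str        # geographic area needing service
    tech: str        # technology the bidder proposes (ignored by the rule)
    locations: int   # households the bidder commits to serve
    subsidy: float   # total support requested, in dollars

def allocate(bids, budget):
    winners, covered = [], set()
    for bid in sorted(bids, key=lambda b: b.subsidy / b.locations):
        if bid.area in covered or bid.subsidy > budget:
            continue
        winners.append(bid)
        covered.add(bid.area)
        budget -= bid.subsidy
    return winners

bids = [
    Bid("Area A", "fixed wireless", locations=200, subsidy=400_000),
    Bid("Area A", "fiber",          locations=200, subsidy=900_000),
    Bid("Area B", "DSL upgrade",    locations=150, subsidy=450_000),
]

for w in allocate(bids, budget=1_000_000):
    print(f"{w.area} ({w.tech}): ${w.subsidy:,.0f} for {w.locations} locations")
```

With these made-up numbers, funding the cheaper fixed-wireless bid for Area A leaves enough of the budget to also reach Area B, which a fiber-only award would not.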

Then there are the chairman’s accomplishments on the spectrum and wireless-internet fronts. Here, he faced resistance from both within the government and industry. In some of the more absurd episodes of government in-fighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this is going to prove to be a good or bad bet in the long term—whether fixed wireless is going to be able to offer the quality and speed of service its proponents promise or whether it instead will be a short-run misallocation of capital that will require clawbacks and re-awards of funding in another few years—but the embrace of the technology demonstrated decisive leadership and thawed a too limited and ossified understanding of what technologies could be used to offer service. Again, as said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.

There is more that the commission did under Chairman Pai’s leadership, beyond the commission’s obvious orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. And similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee. It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.

But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Commissioner Pai said on his first day as chairman that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures that stakeholders regularly engage with an appointed representative of the FCC, putting in the time and miles to linger with a farmer to talk about the upcoming harvest season, these things make that priority real.

Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that will come after—that will serve us well for years to come.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Brent Skorup is a senior research fellow at the Mercatus Center at George Mason University.]

Ajit Pai came into the Federal Communications Commission chairmanship with a single priority: to improve the coverage, cost, and competitiveness of U.S. broadband for the benefit of consumers. The 5G Fast Plan, the formation of the Broadband Deployment Advisory Committee, the large spectrum auctions, and other broadband infrastructure initiatives over the past four years have resulted in accelerated buildouts and higher-quality services. Millions more Americans have gotten connected because of agency action and industry investment.

That brings us to Chairman Pai’s most important action: restoring the deregulatory stance of the FCC toward broadband services and repealing the Title II “net neutrality” rules in 2018. Had he not done this, his and future FCCs would have been bogged down in inscrutable, never-ending net neutrality debates, reminiscent of the Fairness Doctrine disputes that consumed the agency 50 years ago. In repealing the rules, he cleared the decks for the pro-deployment policies that followed and redirected the agency away from its roots in mass-media policy toward a future where the agency’s primary responsibilities are encouraging broadband deployment and adoption.

It took tremendous courage from Chairman Pai and Commissioners Michael O’Rielly and Brendan Carr to vote to repeal the 2015 Title II regulations, though they probably weren’t prepared for the public reaction to a seemingly arcane dispute over regulatory classification. The hysteria ginned up by net-neutrality advocates, members of Congress, celebrities, and too-credulous journalists was unlike anything I’ve seen in political advocacy. Advocates, of course, don’t intend to provoke disturbed individuals, but the irresponsible predictions of “the end of the internet as we know it” and widespread internet service provider (ISP) content blocking drove one man to call in a bomb threat to the FCC, clearing the building in a desperate attempt to delay or derail the FCC’s Title II repeal. At least two other men pleaded guilty to federal charges after issuing vicious death threats to Chairman Pai, a New York congressman, and their families in the run-up to the regulation’s repeal. No public official should have to face anything resembling that over a policy dispute.

For all the furor, net-neutrality advocates promised a neutral internet that never was and never will be. “Happy little bunny rabbit dreams” is how David Clark of MIT, an early chief protocol architect of the internet, derided the idea of treating all online traffic the same. Relatedly, the no-blocking rule—the sine qua non of net neutrality—was always a legally dubious requirement. Legal scholars for years had called into doubt the constitutionality of imposing must-carry requirements on ISPs. Unsurprisingly, a federal appellate judge pressed this point during the 2016 oral arguments over the net neutrality rules. The Obama FCC attorney conceded without a fight; even after the net neutrality order, ISPs were “absolutely” free to curate the internet.

Chairman Pai recognized that the fight wasn’t about website blocking and it wasn’t, strictly speaking, about net neutrality. This was the latest front in the long battle over whether the FCC should strictly regulate mass-media distribution. There is a long tradition of progressive distrust of new (unregulated) media. The media access movement that pushed for broadcast TV and radio and cable regulations from the 1960s to 1980s never went away, but the terminology has changed: disinformation, net neutrality, hate speech, gatekeeper.

The decline in power of regulated media—broadcast radio and TV—and the rising power of unregulated internet-based media—social media, Netflix, and podcasts—meant that the FCC and Congress had few ways to shape American news and media consumption. In the words of Tim Wu, the law professor who coined the term “net neutrality,” the internet rules are about giving the agency the continuing ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”

Title II was the only tool available to bring this powerful new media—broadband access—under intense regulatory scrutiny by regulators and the political class. As net-neutrality advocate and Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition. Net neutrality would draw the agency into contentious mass-media regulation once again, distracting it from universal service efforts, spectrum access and auctions, and cleaning up the regulatory detritus that had slowly accumulated since the passage of the agency’s guiding statutes: the 1934 Communications Act and the 1996 Telecommunications Act.

There are probably items that Chairman Pai wishes he’d finished or had done slightly differently. He’s left a proud legacy, however, and his politically risky decision to repeal the Title II rules redirected agency energies away from no-win net-neutrality battles and toward broadband deployment and infrastructure. Great progress was made, and one hopes the Biden FCC chairperson will continue the trajectory that Pai set.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order – a process that was perhaps subject to more scrutiny than the process of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that failed to include any Ligado representatives or any FCC commissioners for their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard band from neighboring spectrum and also reduce its base-station power levels by 99% compared to what Ligado proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]

Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls.

Less will be said about the important work he has done rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations are the legacies that the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted to do: both build the organization and make significant regulatory reforms.

Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.

Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.

Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai, over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard, as Democrats in general were deeply against President Donald Trump and some members of Congress found a divided FCC convenient.

The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.

On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.

Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analyses. To remedy this situation, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, but unfortunately it was with one of the rare party-line votes. Hopefully, Democratic commissioners have learned the value of the OEA.

The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pretend that his or her initiatives are somehow substantive.

Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:

On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of billions of dollars more in taxes than it is spending on the program. [footnotes omitted]

The lack of transparency led many to not trust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had not Chairman Pai changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on. And it ensures that orders do not change after they are voted on.

The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech in voting to restore light-handed regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgment for the FCC’s research on the impacts of Ligado’s proposal.

It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Harold Feld is senior vice president of Public Knowledge.]

Chairman Ajit Pai prioritized making new spectrum available for 5G. To his credit, he succeeded. Over the course of four years, Chairman Pai made available more high-band and mid-band spectrum, for licensed use and unlicensed use, than any other Federal Communications Commission chairman. He did so in the face of unprecedented opposition from other federal agencies, navigating the chaotic currents of the Trump administration with political acumen and courage. The Pai FCC will go down in history as the 5G FCC, and as the chairman who protected the primacy of FCC control over commercial spectrum policy.

At the same time, the Pai FCC will also go down in history as the most conventional FCC on spectrum policy in the modern era. Chairman Pai undertook no sweeping review of spectrum policy in the manner of former Chairman Michael Powell, and no introduction of new and radically different spectrum technologies such as the introduction of unlicensed spectrum and spread spectrum in the 1980s, or the introduction of auctions in the 1990s. To the contrary, Chairman Pai actually rolled back the experimental short-term license structure adopted in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and replaced it with a conventional long-term license carrying a renewal expectation. He missed a once-in-a-lifetime opportunity to dramatically expand the availability of unlicensed use of the TV white spaces (TVWS) via repacking after the television incentive auction. In reworking the rules for the 2.5 GHz band, although Pai laudably embraced the recommendation to create an application window for rural tribal lands, he rejected the proposal to allow nonprofits a chance to use the band for broadband in favor of conventional auction policy.

Ajit Pai’s Spectrum Policy Gave the US a Strong Position for 5G and Wi-Fi 6

To fully appreciate Chairman Pai’s accomplishments, we must first fully appreciate the urgency of opening new spectrum, and the challenges Pai faced from within the Trump administration itself. While providers can (and should) repurpose spectrum from older technologies to newer technologies, successful widespread deployment can only take place when sufficient amounts of new spectrum become available. This “green field” spectrum allows providers to build out new technologies with the most up-to-date equipment without disrupting existing subscriber services. The protocols developed for mobile 5G services work best with “mid-band” spectrum (generally considered to be frequencies between 2 GHz and 6 GHz). At the time Pai became chairman, the FCC did not have any mid-band spectrum identified for auction.

In addition, spectrum available for unlicensed use has become increasingly congested as more and more services depend on Wi-Fi and other unlicensed applications. Indeed, we have become so dependent on Wi-Fi for home broadband and networking that people routinely talk about buying “Wi-Fi” from commercial broadband providers rather than buying “internet access.” The United States further suffered a serious disadvantage moving forward to next generation Wi-Fi, Wi-Fi 6, because the U.S. lacked a contiguous block of spectrum large enough to take advantage of Wi-Fi 6’s gigabit capabilities. Without gigabit Wi-Fi, Americans will increasingly be unable to use the applications that gigabit broadband to the home makes possible.

But virtually all spectrum—particularly mid-band spectrum—has significant incumbents. These incumbents include federal users, particularly the U.S. Department of Defense. Finding new spectrum optimal for 5G required reclaiming spectrum from these incumbents. Unlicensed services do not require relocating incumbent users, but creating such “underlay” unlicensed spectrum access requires rules to prevent unlicensed operations from causing harmful interference to licensed services. Needless to say, incumbent services fiercely resist any change in spectrum-allocation rules, claiming that reducing their spectrum allocation or permitting unlicensed services will compromise valuable existing services, while simultaneously causing harmful interference.

The need to reallocate unprecedented amounts of spectrum to ensure successful 5G and Wi-Fi 6 deployment in the United States created an unholy alliance of powerful incumbents, commercial and federal, dedicated to blocking FCC action. Federal agencies—in violation of established federal spectrum policy—publicly challenged the FCC’s spectrum-allocation decisions. Powerful industry incumbents—such as the auto industry, the power industry, and defense contractors—aggressively lobbied Congress to reverse the FCC’s spectrum actions by legislation. The National Telecommunications and Information Administration (NTIA), the federal agency tasked with formulating federal spectrum policy, was missing in action as it rotated among different acting agency heads. As the chair and ranking member of the House Commerce Committee noted, this unprecedented and very public opposition by federal agencies to FCC spectrum policy threatened U.S. wireless interests both domestically and internationally.

Navigating this hostile terrain required Pai to exercise both political acumen and political will. Pai accomplished his goal of reallocating 600 MHz of spectrum for auction, opening over 1200 MHz of contiguous spectrum for unlicensed use, and authorizing the new entrant Ligado Networks over the objections of the DOD. He did so by a combination of persuading President Donald Trump of the importance of maintaining U.S. leadership in 5G and insisting on impeccable analysis by the FCC’s engineers to provide support for the reallocation and underlay decisions. On the most significant votes, Pai secured support (or partial support) from the Democrats. Perhaps most importantly, Pai successfully defended the institutional role of the FCC as the ultimate decisionmaker on commercial spectrum use, not subject to a “heckler’s veto” by other federal agencies.

Missed Innovation, ‘Command and Control Lite’

While acknowledging Pai’s accomplishments, a fair consideration of Pai’s legacy must also consider his shortcomings. As chairman, Pai proved the most conservative FCC chair on spectrum policy since the 1980s. The Reagan FCC produced unlicensed and spread spectrum rules. The Clinton FCC created the spectrum auction regime. The Bush FCC included a spectrum task force and produced the concept of database management for unlicensed services, creating the TVWS and laying the groundwork for CBRS in the 3.5 GHz band. The Obama FCC recommended and created the world’s first incentive auction.

The Trump FCC does more than lack comparable accomplishments; it actively rolled back previous innovations. Within the first year of his chairmanship, Pai began a rulemaking designed to roll back the innovative priority access licensing (PALs). Under the rules adopted under the previous chairman, PALs provided exclusive use on a census block basis for three years with no expectation of renewal. Pai delayed the rollout of CBRS for two years to replace this approach with a standard license structure of 10 years with an expectation of renewal, explicitly to facilitate traditional carrier investment in traditional networks. Pai followed the same path when restructuring the 2.5 GHz band. While laudably creating a window for Native Americans to apply for 2.5 GHz licenses on rural tribal lands, Pai rejected proposals from nonprofits to adopt a window for non-commercial providers to offer broadband. Instead, he simply eliminated the educational requirement and adopted a standard auction for distribution of remaining licenses.

Similarly, in the unlicensed space, Pai consistently declined to promote innovation. In the repacking following the broadcast incentive auction, Pai rejected the proposal of structuring the repacking to ensure usable TVWS in every market. Instead, under Pai, the FCC managed the repacking so as to minimize the burden on incumbent primary and secondary licensees. As a result, major markets such as Los Angeles have zero channels available for unlicensed TVWS operation. This effectively relegates the service to a niche rural service, augmenting existing rural wireless ISPs.

The result is a modified form of “command and control,” the now-discredited system where the FCC would allocate licenses to provide specific services such as “FM radio” or “mobile pager service.” While preserving license flexibility in name, the licensing rules are explicitly structured to promote certain types of investment and business cases. The result is to encourage the same types of licensees to offer improved and more powerful versions of the same types of services, while discouraging more radical innovations.

Conclusion

Chairman Pai can rightly take pride in his overall 5G legacy. He preserved the institutional role of the FCC as the agency responsible for expanding our nation’s access to wireless services against sustained attack by federal agencies determined to protect their own spectrum interests. He provided enough green field spectrum for both licensed services and unlicensed services to permit the successful deployment of 5G and Wi-Fi 6. At the same time, however, he failed to encourage more radical spectrum policies that have made the United States the birthplace of such technologies as mobile broadband and Wi-Fi. We have won the “race” to next generation wireless, but the players and services are likely to stay the same.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Randy May is president of the Free State Foundation.]

I am pleased to participate in this retrospective symposium regarding Ajit Pai’s tenure as Federal Communications Commission chairman. I have been closely involved in communications law and policy for nearly 45 years, and, as I’ve said several times since Chairman Pai announced his departure, he will leave as one of the most consequential leaders in the agency’s history. And, I should hastily add, consequential in a positive way, because it’s possible to be consequential in a not-so-positive way.

Chairman Pai’s leadership has been impactful in many different areas—for example, spectrum availability, media deregulation, and institutional reform, to name three—but in this tribute I will focus on his efforts regarding “net neutrality.” I use the quotes because the term has been used by many to mean many different things in many different contexts.

Within a year of becoming chairman, and with the support of fellow Republican commissioners Michael O’Rielly and Brendan Carr, Ajit Pai led the agency in reversing the public utility-like “net neutrality” regulation that had been imposed by the Obama FCC in February 2015 in what became known as the Title II Order. The Title II Order had classified internet service providers (ISPs) as “telecommunications carriers” subject to the same common-carrier regulatory regime imposed on monopolistic Ma Bell during most of the 20th century. While “forbearing” from imposing the full array of traditional common-carrier regulatory mandates, the Title II Order also subjected ISPs to sanctions if they violated an amorphous “general conduct standard,” which provided that ISPs could not “unreasonably” interfere with or disadvantage end users or edge providers like Google, Facebook, and the like.

The aptly styled Restoring Internet Freedom Order (RIF Order), adopted in December 2017, reversed nearly all of the Title II Order’s heavy-handed regulation of ISPs in favor of a light-touch regulatory regime. It was aptly named, because the RIF Order “restored” market “freedom” to internet access regulation that had mostly prevailed since the turn of the 21st century. It’s worth remembering that, in 1999, in opting not to require that newly emerging cable broadband providers be subjected to a public utility-style regime, Clinton-appointee FCC Chairman William Kennard declared: “[T]he alternative is to go to the telephone world…and just pick up this whole morass of regulation and dump it wholesale on the cable pipe. That is not good for America.” And worth recalling, too, that in 2002, the commission, under the leadership of Chairman Michael Powell, determined that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.”

It was this reliance on market freedom that was “restored” under Ajit Pai’s leadership. In an appearance at a Free State Foundation event in December 2016, barely a month before becoming chairman, then-Commissioner Pai declared: “It is time to fire up the weed whacker and remove those rules that are holding back investment, innovation, and job creation.” And he added: “Proof of market failure should guide the next commission’s consideration of new regulations.” True to his word, the weed whacker was used to cut down the public utility regime imposed on ISPs by his predecessor. And the lack of proof of any demonstrable market failure was at the core of the RIF Order’s reasoning.

It is true that, as a matter of law, the D.C. Circuit’s affirmance of the Restoring Internet Freedom Order in Mozilla v. FCC rested heavily on the court’s application of Chevron deference, just as it is true that Chevron deference played a central role in the affirmance of the Title II Order and the Brand X decision before that. And it would be disingenuous to suggest that, if a newly reconstituted Biden FCC reinstitutes a public utility-like regulatory regime for ISPs, Chevron deference won’t once again play a central role in the appeal.

But optimist that I am, and focusing not on what possibly may be done as a matter of law, but on what ought to be done as a matter of policy, the “new” FCC should leave in place the RIF Order’s light-touch regulatory regime. In affirming most of the RIF Order in Mozilla, the D.C. Circuit agreed there was substantial evidence supporting the commission’s predictive judgment that reclassification of ISPs “away from public-utility style regulation” was “likely to increase ISP investment and output.” And the court agreed there was substantial evidence to support the commission’s position that such regulation is especially inapt for “a dynamic industry built on technological development and disruption.”

Indeed, the evidence has only become more substantial since the RIF Order’s adoption. Here are only a few factual snippets: According to CTIA, wireless-industry investment for 2019 grew to $29.1 billion, up from $27.4 billion in 2018 and $25.6 billion in 2017. USTelecom estimates that wireline broadband ISPs invested approximately $80 billion in network infrastructure in 2018, up more than $3.1 billion from $76.9 billion in 2017. And total investment most likely increased in 2019 for wireline ISPs as it did for wireless ISPs. Figures cited in the FCC’s 2020 Broadband Deployment Report indicate that fiber broadband networks reached an additional 6.5 million homes in 2019, a 16% increase over the prior year and the largest single-year increase ever.

Additionally, more Americans have access to broadband internet access services, and at ever higher speeds. According to an April 2020 report by USTelecom, for example, gigabit internet service is available to at least 85% of U.S. homes, compared to only 6% of U.S. homes three-and-a-half years ago. In an October 2020 blog post, Chairman Pai observed that “average download speeds for fixed broadband in the United States have doubled, increasing by over 99%” since the RIF Order was adopted. Ookla Speedtests similarly show significant gains in mobile wireless speeds, climbing to 47/10 Mbps in September 2020 compared to 27/8 Mbps in the first half of 2018.

More evidentiary support could be offered regarding the positive results that followed adoption of the RIF Order, and I assume in the coming year it will be. But the import of abandonment of public utility-like regulation of ISPs should be clear.

There is certainly much that Ajit Pai, the first-generation son of immigrants who came to America seeking opportunity in the freedom it offered, accomplished during his tenure. To my way of thinking, “Restoring Internet Freedom” ranks at—or at least near—the top of the list.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Robert McDowell is a partner with Cooley LLP and a former commissioner of the Federal Communications Commission.]

Many thanks to Geoffrey Manne for this opportunity to memorialize a few thoughts I have about Ajit’s service on the Federal Communications Commission. My remarks will be more about Ajit as a person rather than the substance and long laundry list of his accomplishments as chair. Others will do that, I’m sure.

The first memory I have of meeting Ajit V. Pai reaches back to 2007, after I had served on the commission for about a year. In one of my regular meetings with then-FCC General Counsel Sam Feder, Sam was very proud to introduce me to his new hire. I saw before me an eager and polite young man with a million-watt smile. After reviewing his resume, I immediately recognized that he was already quite accomplished, despite his tender young age: the son of immigrants; hailing from the heart of America as the native of a small town in Kansas; Harvard undergrad with academic distinction; a J.D. from the University of Chicago – also with academic distinction; public service in all three branches of the federal government; and much more.

Wow! “This kid has a very bright future,” I thought. And history proved that, for once, I was right. In fact, Ajit’s appointment to the FCC was one key reason why I decided to step down from the commission before the expiration of my term. But more on that later. As I got to know Ajit more over the years, I learned that he is super bright (not everyone from Harvard is, by the way), exudes a sunny personality, and is a principled, common-sense, and compassionate conservative dedicated to the rule of law, to respecting the wisdom of markets, and to serving the public interest.

Like my own Forrest Gump dumb luck in getting to the FCC, Ajit’s path to a seat on the commission came about in part by happenstance. With Commissioner Meredith Attwell Baker’s surprise departure in the spring of 2011, a rare opportunity was suddenly created. Also, like my journey to the commission, a blizzard of names swirled about regarding who might be appointed to that seat by President Barack Obama. Ajit’s name was among the least-known when compared to higher-profile candidates. But once he was nominated, I was excited to reach out to him and offer briefings and anything else he needed to help him prepare for the gauntlet of the Senate confirmation process. It was inspiring to attend his confirmation hearing and to see his immigrant parents smiling so proudly at their talented and accomplished son. Little did either one of us know that his confirmation would be held by senators due to an FCC proceeding that had nothing to do with him. (There’s some irony regarding which proceeding that was, but I digress. Ajit will understand.)

So many months passed by while he waited and waited…and waited for the holds to be lifted so he could be confirmed. In fact, his confirmation lingered for so long it was unclear if he would ever be confirmed. I know that was incredibly frustrating for him and his beautiful family. But eventually, providence smiled upon him and he became my colleague on the commission. Largely ignored by the media, Ajit made history by becoming the first Indian-American appointed to the FCC. In fact, he may be the first, or one of very few, commissioners who was a first-generation American. This wonderful accomplishment should have been celebrated more. But I sense the silence regarding the positive ground-breaking that Ajit achieved in this regard bothers me more than him. And that tells you a lot about his virtues; virtues which would serve him well after becoming chairman.

I always ran to work during my seven years as a commissioner. I loved that job and I licked the plate clean every day. Upon his swearing-in as my colleague, I could tell instantly that Ajit loved his job as much as I loved mine. Not all commissioners love being commissioners, which I could never understand. With how many jobs are you truly independent and able to touch and improve the daily lives of every American? Ajit understood the value of the gift of being a commissioner right away. While he and I were in the minority on the FCC during the Obama administration, the public should know that the majority of FCC votes back then were bipartisan. But there were a few very important votes that were not unanimous, and those of us in the minority had a sacred role to play: that of respectful but passionate dissenter, to help inform the public, the appellate courts, Congress, the White House, and future FCCs about the better path as we saw it.

It was clear that “The Kid,” as I once thought of him, could write fantastic dissents. After a few months of witnessing his talents, and after the 2012 elections, I began to think: “The role of Loyal Opposition will be in fine hands if I step down after nearly seven years. Maybe it is time to let ‘The Kid’ write these dang dissents for the next four years, and then I can be released back into my natural habitat: the private sector.” And so, my thought process evolved. Accordingly, on May 17, 2013, the day I left office, Ajit V. Pai became the “senior Republican on the FCC.” Little did either one of us know at the time that that move, combined with a surprise election result in 2016, would pave the path for him to become chairman of the FCC.

Ajit and his team accomplished so much in his four years as chairman. I’ll let others enumerate those accomplishments, but I am delighted to see the eye-popping, jaw-dropping and record-smashing success of the C-Band auction serve as a VERY LOUD and beautiful exclamation point on his legacy. Keep in mind that many of the “best and brightest,” including U.S. Senators and two of his FCC colleagues, said the C-Band auction should either never happen or would be more successful if it had been shaped their way. But the markets have spoken, and the C-Band auction has broken a record of success that may not be surpassed for many years. Ajit, his colleagues Mike O’Rielly and Brendan Carr, and his entire team should be very proud of their handiwork.

In closing, I want to take readers briefly backstage with this still-young man. The wind in his sails is his beautiful bride, Janine. That’s Dr. Janine Van Lancker, a highly accomplished physician. Together with their two beautiful children, they have been Ajit’s Rock of Gibraltar, especially in the most trying of times. I won’t dignify the criminals who threatened their lives by going into detail, but no family of a public servant should ever have to endure what they did. Ever. But the trauma that came with serving did not diminish Ajit’s and Janine’s natural inclination to think of others. While I was on my erstwhile COVID-deathbed last March, Ajit graciously texted me, asking about my condition and offering the help and support of his personal physician, his bride Janine. If you remember nothing else about this blog post, please remember that.

Well done, “Kid from Kansas.” Well done. And thank you.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog.[1] It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.

In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.

What is the case about?

The case is about exclusivity and exclusion in the distribution of search engine services; that Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive pre-set (installed) default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.[2]

Exclusion can be seen as a form of “raising rivals’ costs.”[3]  Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.

It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google.[4] Those cases focused on alleged self-preferencing: the claim that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.

What is the relevant market?

For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.

Here is one of the important places where the DOJ’s case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics.[5] This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers. For this latter category, the paradigm of the “hypothetical monopolist” and the possibility that this hypothetical monopolist could prospectively impose a “small but significant non-transitory increase in price” (SSNIP) has carried the day for the purposes of market delineation.
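For concreteness, one standard way the hypothetical-monopolist test is implemented in merger analysis (a textbook device, not anything drawn from this case) is critical-loss analysis: a SSNIP of X percent is profitable for the hypothetical monopolist only if the fraction of unit sales it would lose stays below

\[
\text{Critical Loss} = \frac{X}{X+M},
\]

where M is the price-cost margin expressed as a percentage of price. With a 5 percent SSNIP and a 45 percent margin, for example, the candidate market is properly delineated only if the hypothetical monopolist would lose less than 5/(5+45) = 10 percent of its sales.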

But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.
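As a minimal worked version of that Micro 101 diagram, with an assumed linear demand curve (a functional form the discussion does not specify):

\[
P = a - bQ, \quad MC = c \;\;\Rightarrow\;\; MR = a - 2bQ = c \;\;\Rightarrow\;\; Q^{m} = \frac{a-c}{2b}, \quad P^{m} = \frac{a+c}{2}.
\]

A competitive industry with the same cost c would price at P = c, so the monopoly markup is (a − c)/2 > 0; the difficulty, discussed next, is that nothing in a real case tells the court what a and c actually are.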

But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.

Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level.[6]  And thus the claim, “Look at all of the firms that I compete with!  I don’t have market power!” similarly has no informational value.

Let us now bring this problem back to the Google monopolization allegation:  What is the relevant market?  In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?

Further, what would the exercise of market power in a (delineated relevant) market look like? Higher-than-competitive prices for advertising that targets search-results recipients are one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising).[7] And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.[8]

In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google’s revenue share accounted for 32% in 2019[9]), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising[10]), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.[11]
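The arithmetic behind those progressively weaker cases, using the shares cited above and taking “about twice the size” literally (a back-of-the-envelope illustration only):

\[
0.32 \times \tfrac{1}{2} = 0.16,
\]

i.e., a 32 percent share of online advertising corresponds to roughly a 16 percent share of all U.S. advertising, and those two figures support very different inferences about market power.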

What exactly has Google been paying for, and why?

As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?

Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.

There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input.[12] The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
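A minimal numerical sketch of this logic (the numbers are invented purely for illustration):

\[
\Pi^{m} = 100, \qquad \pi^{d}_{I} = \pi^{d}_{E} = 30.
\]

The incumbent will pay up to \( \Pi^{m} - \pi^{d}_{I} = 70 \) to keep the input exclusive, while the challenger will pay at most \( \pi^{d}_{E} = 30 \) to gain access. Whenever monopoly profit exceeds the sum of duopoly profits (here, 100 > 60), the incumbent outbids the challenger, no matter how well-capitalized the challenger is.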

To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.

Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).

But this alternative scenario returns us to the original puzzle:  Why is Google making such large payments to the distributors for those exclusive default rights?

An intriguing possibility

Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users.[13] This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.

But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.[14]

As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.

What would be a suitable remedy/relief?

The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.

However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.

But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?[15]

Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.

Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?

It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.

In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine.[16] This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.

At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.

Conclusion

The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.

The developments in the case will surely be interesting.


[1] The DOJ’s suit was joined by 11 states.  More states subsequently filed two separate antitrust lawsuits against Google in December.

[2] There is also a related argument:  That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers.  Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals.  This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.

[3] See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp.  267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.

[4] For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 489-513.

[5] For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.

[6] Forgetting this important point is often termed “the cellophane fallacy”, since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (and du Pont, in its defense, claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956). For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.

[7] In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered.  It is worth noting that Bing offers “rewards” to frequent searchers; see https://www.microsoft.com/en-us/bing/defaults-rewards.  It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.

[8] As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.

[9] As estimated by eMarketer: https://www.emarketer.com/newsroom/index.php/google-ad-revenues-to-drop-for-the-first-time/.

[10] See https://www.visualcapitalist.com/us-advertisers-spend-20-years/.

[11] And, again, if we return to the du Pont cellophane case:  Was the relevant market cellophane?  Or all flexible wrapping materials?

[12] This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.

[13] To my knowledge, Randal C. Picker was the first to suggest this possibility; see https://www.competitionpolicyinternational.com/a-first-look-at-u-s-v-google/.  Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question.  In addition, the Gilbert-Newbery insight applies here as well:  Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it is thereby in a duopoly market.  But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.

[14] The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”.  For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 331-353.

[15] This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs. See, for example, John Asker and Heski Bar-Isaac, “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals,” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.

[16] And, again, for the reasons discussed above, Apple might not be eager to make the effort.

Admirers of the late Supreme Court Justice Louis Brandeis and other antitrust populists often trace the history of American anti-monopoly sentiments from the Founding Era through the Progressive Era’s passage of laws to fight the scourge of 19th century monopolists. For example, Matt Stoller of the American Economic Liberties Project, both in his book Goliath and in other writings, frames the story of America essentially as a battle between monopolists and anti-monopolists.

According to this reading, it was in the late 20th century that powerful corporations and monied interests ultimately succeeded in winning the battle in favor of monopoly power against antitrust authorities, aided by the scholarship of the “ideological” Chicago school of economics and more moderate law & economics scholars like Herbert Hovenkamp of the University of Pennsylvania Law School.

It is a framing that leaves little room for disagreements about economic theory or evidence. One is either anti-monopoly or pro-monopoly, anti-corporate power or pro-corporate power.

What this story muddles is that the dominant anti-monopoly strain from English common law, which continued well into the late 19th century, was opposed specifically to government-granted monopoly. In contrast, today’s “anti-monopolists” focus myopically on alleged monopolies that often benefit consumers, while largely ignoring monopoly power granted by government. The real monopoly problem, which antitrust law not only fails to solve but actively immunizes, is anticompetitive government policy. Recovering the older anti-monopoly tradition would better focus activists today.

Common Law Anti-Monopoly Tradition

Scholars like Timothy Sandefur of the Goldwater Institute have written about the right to earn a living that arose out of English common law and was inherited by the United States. This anti-monopoly stance was aimed at government-granted privileges, not at successful business ventures that gained significant size or scale.

For instance, 1602’s Darcy v. Allein, better known as the “Case of Monopolies,” dealt with a “patent” originally granted by Queen Elizabeth I in 1576 to Ralph Bowes, and later bought by Edward Darcy, to make and sell playing cards. Darcy did not invent or improve playing cards; he merely had permission to be their sole purveyor. Thomas Allein, who attempted to sell playing cards he created, was sued for violating Darcy’s exclusive rights. Darcy’s monopoly ultimately was held to be invalid by the court, which refused to convict Allein.

Edward Coke, who actually argued on behalf of the patent in Darcy v. Allein, wrote that the case stood for the proposition that:

All trades, as well mechanical as others, which prevent idleness (the bane of the commonwealth) and exercise men and youth in labour, for the maintenance of themselves and their families, and for the increase of their substance, to serve the Queen when occasion shall require, are profitable for the commonwealth, and therefore the grant to the plaintiff to have the sole making of them is against the common law, and the benefit and liberty of the subject. (emphasis added)

In essence, Coke’s argument was more closely linked to a “right to work” than to market structures, business efficiency, or firm conduct.

The courts largely resisted royal monopolies in 17th century England, finding such grants to violate the common law. For instance, in The Case of the Tailors of Ipswich, the court cited Darcy and found:

…at the common law, no man could be prohibited from working in any lawful trade, for the law abhors idleness, the mother of all evil… especially in young men, who ought in their youth, (which is their seed time) to learn lawful sciences and trades, which are profitable to the commonwealth, and whereof they might reap the fruit in their old age, for idle in youth, poor in age; and therefore the common law abhors all monopolies, which prohibit any from working in any lawful trade. (emphasis added)

The principles enunciated in these cases were eventually codified in the Statute of Monopolies, which prohibited the crown from granting monopolies in most circumstances. This was especially the case when the monopoly prevented the right to otherwise lawful work.

This common-law tradition also had disdain for private contracts that created monopoly by restraining the right to work. For instance, the famous Dyer’s case of 1414 held that a contract in which John Dyer promised not to practice his trade in the same town as the plaintiff was void for being an unreasonable restraint on trade. The judge is supposed to have said, in response to the plaintiff’s complaint, that he would have imprisoned anyone who had claimed such a monopoly on his own authority.

Over time, the common law developed analysis that looked at the reasonableness of restraints on trade, such as the extent to which they were limited in geographic reach and duration, as well as the consideration given in return. This part of the anti-monopoly tradition would later constitute the thread pulled on by the populists and progressives who created the earliest American antitrust laws.

Early American Anti-Monopoly Tradition

American law largely inherited the English common law system. It also inherited the anti-monopoly tradition the common law embodied. The founding generation of American lawyers were trained on Edward Coke’s commentary in “The Institutes of the Laws of England,” wherein he strongly opposed government-granted monopolies.

This sentiment can be found in the 1641 Massachusetts Body of Liberties, which stated: “No monopolies shall be granted or allowed amongst us, but of such new Inventions that are profitable to the Countrie, and that for a short time.” In fact, the Boston Tea Party itself was in part a protest of the monopoly granted to the East India Company, which included a special refund from duties by Parliament that no other tea importers enjoyed.

This anti-monopoly tradition also can be seen in the debates at the Constitutional Convention. A proposal to give the federal government power to grant “charters of incorporation” was voted down on fears it could lead to monopolies. Thomas Jefferson, George Mason, and several Antifederalists expressed concerns about the new national government’s ability to grant monopolies, arguing that an anti-monopoly clause should be added to the Constitution. Six states wanted to include provisions that would ban monopolies and the granting of special privileges in the Constitution.

The American anti-monopoly tradition remained largely an anti-government tradition throughout much of the 19th century, rearing its head in debates about the Bank of the United States, publicly-funded internal improvements, and government-granted monopolies over bridges and seas. Pamphleteer Lysander Spooner even tried to start a rival to the Post Office by appealing to the strong American impulse against monopoly.

Coinciding with the Industrial Revolution, liberalization of corporate law made it easier for private persons to organize firms that were not simply grants of exclusive monopoly. But discontent with industrialization and other social changes contributed to the birth of a populist movement, and later to progressives like Brandeis, who focused on private combinations and corporate power rather than government-granted privileges. This is the strand of anti-monopoly sentiment that continues to dominate the rhetoric today.

What This Means for Today

Modern anti-monopoly advocates have largely forgotten the lessons of the long Anglo-American tradition that found government is often the source of monopoly power. Indeed, American law privileges government’s ability to grant favors to businesses through licensing, the tax code, subsidies, and even regulation. The state action doctrine from Parker v. Brown exempts state and municipal authorities from antitrust lawsuits even where their policies have anticompetitive effects. And the Noerr-Pennington doctrine protects the rights of industry groups to lobby the government to pass anticompetitive laws.

As a result, government is often used to harm competition, with no remedy outside of the political process that created the monopoly. Antitrust law is used instead to target businesses built by serving consumers well in the marketplace.

Recovering this older anti-monopoly tradition would help focus the anti-monopoly movement on a serious problem modern antitrust misses. While the consumer-welfare standard that modern antitrust advocates often decry has helped to focus the law on actual harms to consumers, antitrust more broadly continues to encourage rent-seeking by immunizing state action and lobbying behavior.

With the COVID-19 vaccine made by Moderna joining the one from Pfizer and BioNTech in gaining approval from the U.S. Food and Drug Administration, it should be time to celebrate the U.S. system of pharmaceutical development. The system’s incentives—notably granting patent rights to firms that invest in new and novel discoveries—have worked to an astonishing degree, producing not just one but as many as three or four effective approaches to end a viral pandemic that, just a year ago, was completely unknown.

Alas, it appears not all observers agree. Now that we have the vaccines, some advocate suspending or limiting patent rights—for example, by imposing a compulsory licensing scheme—with the argument that this is the only way for the vaccines to be produced in mass quantities worldwide. Some critics even assert that abolishing or diminishing property rights in pharmaceuticals is needed to end the pandemic.

In truth, we can effectively and efficiently distribute the vaccines while still maintaining the integrity of our patent system. 

What the false framing ignores are the important commercialization and distribution functions that patents provide, as well as the deep, long-term incentives the patent system provides to create medical innovations and develop a robust pharmaceutical supply chain. Unless we are sure this is the last pandemic we will ever face, repealing intellectual property rights now would be a catastrophic mistake.

The supply chains necessary to adequately scale drug production are incredibly complex, and do not appear overnight. The coordination and technical expertise needed to support worldwide distribution of medicines depends on an ongoing pipeline of a wide variety of pharmaceuticals to keep the entire operation viable. Public-spirited officials may in some cases be able to piece together facilities sufficient to produce and distribute a single medicine in the short term, but over the long term, global health depends on profit motives to guarantee the commercialization pipeline remains healthy. 

But the real challenge is in maintaining proper incentives to develop new drugs. It has long been understood that information goods like intellectual property will be undersupplied without sufficient legal protections. Innovators and those that commercialize innovations—like researchers and pharmaceutical companies—have less incentive to discover and market new medicines as the likelihood that they will be able to realize a return for their efforts diminishes. Without those returns, it’s far less certain the COVID vaccines would have been produced so quickly, or at all. The same holds for the vaccines we will need for the next crisis or badly needed treatments for other deadly diseases.

Patents are not the only way to structure incentives, as can be seen with the current vaccines. Pharmaceutical companies also took financial incentives from various governments in the form of direct payment or in purchase guarantees. But this enhances, rather than diminishes, the larger argument. There needs to be adequate returns for those who engage in large, risky undertakings like creating a new drug. 

Some critics would prefer to limit pharmaceutical companies’ returns solely to those early government investments, but there are problems with this approach. It is difficult for governments to know beforehand what level of profit is needed to properly incentivize firms to engage in producing these innovations.  To the extent that direct government investment is useful, it often will be as an additional inducement that encourages new entry by multiple firms who might each pursue different technologies. 

Thus, in the case of coronavirus vaccines, government subsidies may have enticed more competitors to enter more quickly, or not to drop out as quickly, in hopes that they would still realize a profit, notwithstanding the risks. Where there might have been only one or two vaccines produced in the United States, it appears likely we will see as many as four.

But there will always be necessary trade-offs. Governments cannot know how to set proper incentives to encourage development of every possible medicine for every possible condition by every possible producer.  Not only do we not know which diseases and which firms to prioritize, but we have no idea how to determine which treatment approaches to encourage. 

The COVID-19 vaccines provide a clear illustration of this problem. We have seen development of both traditional vaccines and experimental mRNA treatments to combat the virus. Thankfully, both have shown positive results, but there was no way to know that in March. In this perennial state of ignorance, markets generally have provided the best—though still imperfect—way to make decisions.

The patent system’s critics sometimes claim that prizes would offer a better way to encourage discovery. But if we relied solely on government-directed prizes, we might never have had the needed research into the technology that underlies mRNA. As one recent report put it, “before messenger RNA was a multibillion-dollar idea, it was a scientific backwater.” Simply put, without patent rights as the backstop to purely academic or government-led innovation and commercialization, it is far less likely that we would have seen successful COVID vaccines developed as quickly.

It is difficult for governments to be prepared for the unknown. Abolishing or diminishing pharmaceutical patents would leave us even less prepared for the next medical crisis. That would only add to the lasting damage that the COVID-19 pandemic has already wrought on the world.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage is in that: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this to be a proclamation of “what economics has to say about X,” but merely just to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible that such trade promotions hurt customers, so it is theoretically possible that Google’s contracts hurt consumers. Ultimately, the theoretical possibility of anticompetitive behavior that harms consumers does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we could conclude that consumers are worse off (if just a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are also competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 know that a monopolist chooses the quantity at which marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower; therefore, Apple will increase quantity and lower its price.
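As a minimal algebraic sketch of this pass-through logic, assuming (purely for illustration) a linear demand curve and treating Google’s payment as a per-unit subsidy s (neither assumption comes from the text):

\[
P = a - bQ, \quad MC = c - s \;\;\Rightarrow\;\; P^{m} = \frac{a + c - s}{2},
\]

so the profit-maximizing phone price falls by s/2 relative to the no-payment price of (a + c)/2. The exact pass-through rate depends on the shape of demand, but the qualitative point, that lower marginal cost means a lower price even for a firm with market power, carries over broadly.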

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How trade promotions could harm customers

Since Bork’s writing, many theoretical papers have shown exceptions to Bork’s logic. There are times when the retailers’ incentives are not aligned with those of their customers. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or, more commonly, of exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of a coordination problem. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
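A short sketch in Python makes the arithmetic of this toy example explicit (the numbers simply mirror the example above; the three-searcher viability threshold is an assumption of the model, not a fact about the industry):

```python
# Toy version of the coordination-failure story: 10 searchers, each engine
# needs at least 3 users to operate profitably, and 8 searchers are locked
# in to Google by the default contracts. All numbers are illustrative.

MIN_VIABLE_USERS = 3        # assumed minimum efficient scale
TOTAL_SEARCHERS = 10
LOCKED_IN_TO_GOOGLE = 8     # searchers who never change the default

def viable(users: int) -> bool:
    """An engine can operate profitably only at or above the minimum scale."""
    return users >= MIN_VIABLE_USERS

# Even if every unattached searcher prefers Bing, Bing cannot reach scale.
free_searchers = TOTAL_SEARCHERS - LOCKED_IN_TO_GOOGLE
print(f"Bing's best case: {free_searchers} users -> viable? {viable(free_searchers)}")
print(f"Google:           {LOCKED_IN_TO_GOOGLE} users -> viable? {viable(LOCKED_IN_TO_GOOGLE)}")
# Prints: Bing's best case: 2 users -> viable? False
#         Google:           8 users -> viable? True
```

In this sketch it is the lock-in, not relative quality, that drives the outcome, which is exactly why the next paragraphs ask whether the lock-in story is credible.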

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view, as opposed to a legal one, I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover its fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent competing search engines from reaching an efficient size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.