Archives For technology

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]

I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a telecom policy event in Washington, D.C., from not long ago. The story featured a couple of well-known participants in federal telecom policy as they were talking about how to close the rural digital divide. The punchline of the story was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.

And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.

Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more of his tenure as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on that trip, and I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.

Rural Digital Divide Policy on the Ground

Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:

For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]

Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.

It’s not hard to see why this is the case. Looking at North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models—the cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are relatively few economies of scale that allow carriers to amortize these costs across multiple households. We can add that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.

On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.

Pai’s Progress Closing the Rural Digital Divide

This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.

On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Chairman Pai’s resume. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction-based approach to supporting service deployment.

This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.

The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.

Then there are the chairman’s accomplishments on the spectrum and wireless-internet fronts. Here, he faced resistance from both within the government and industry. In some of the more absurd episodes of government infighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this will prove a good or bad bet in the long term—whether fixed wireless can offer the quality and speed of service its proponents promise, or whether it will instead be a short-run misallocation of capital that requires clawbacks and re-awards of funding in a few years—but the embrace of the technology demonstrated decisive leadership and thawed a too-limited and ossified understanding of what technologies could be used to offer service. Again, as said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.

There is more that the commission did under Chairman Pai’s leadership, beyond its obvious orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. Similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee (BDAC). It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.

But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Chairman Pai said on his first day in the job that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures stakeholders regularly engage with an appointed representative of the FCC, and putting in the time and miles to linger with a farmer talking about the upcoming harvest season: these things make that priority real.

Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that will come after—that will serve us well for years to come.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Brent Skorup is a senior research fellow at the Mercatus Center at George Mason University.]

Ajit Pai came into the Federal Communications Commission chairmanship with a single priority: to improve the coverage, cost, and competitiveness of U.S. broadband for the benefit of consumers. The 5G Fast Plan, the formation of the Broadband Deployment Advisory Committee, the large spectrum auctions, and other broadband infrastructure initiatives over the past four years have resulted in accelerated buildouts and higher-quality services. Millions more Americans have gotten connected because of agency action and industry investment.

That brings us to Chairman Pai’s most important action: restoring the deregulatory stance of the FCC toward broadband services by repealing the Title II “net neutrality” rules in 2018. Had he not done this, his and future FCCs would have been bogged down in inscrutable, never-ending net-neutrality debates, reminiscent of the Fairness Doctrine disputes that consumed the agency 50 years ago. By repealing the rules, he cleared the decks for the pro-deployment policies that followed and redirected the agency away from its roots in mass-media policy, toward a future in which its primary responsibilities are encouraging broadband deployment and adoption.

It took tremendous courage from Chairman Pai and Commissioners Michael O’Rielly and Brendan Carr to vote to repeal the 2015 Title II regulations, though they probably weren’t prepared for the public reaction to a seemingly arcane dispute over regulatory classification. The hysteria ginned up by net-neutrality advocates, members of Congress, celebrities, and too-credulous journalists was unlike anything I’ve seen in political advocacy. Advocates, of course, don’t intend to provoke disturbed individuals, but the irresponsible predictions of “the end of the internet as we know it” and of widespread internet service provider (ISP) content blocking drove one man to call in a bomb threat to the FCC, clearing the building in a desperate attempt to delay or derail the FCC’s Title II repeal. At least two other men pleaded guilty to federal charges after issuing vicious death threats to Chairman Pai, a New York congressman, and their families in the run-up to the regulation’s repeal. No public official should have to face anything resembling that over a policy dispute.

For all the furor, net-neutrality advocates promised a neutral internet that never was and never will be. “Happy little bunny rabbit dreams” is how David Clark of MIT, an early chief protocol architect of the internet, derided the idea of treating all online traffic the same. Relatedly, the no-blocking rule—the sine qua non of net neutrality—was always a legally dubious requirement. Legal scholars had for years called into doubt the constitutionality of imposing must-carry requirements on ISPs. Unsurprisingly, a federal appellate judge pressed this point in the 2016 oral arguments over the net-neutrality rules. The Obama FCC attorney conceded without a fight: even after the net-neutrality order, ISPs were “absolutely” free to curate the internet.

Chairman Pai recognized that the fight wasn’t about website blocking, and it wasn’t, strictly speaking, about net neutrality. This was the latest front in the long battle over whether the FCC should strictly regulate mass-media distribution. There is a long tradition of progressive distrust of new (unregulated) media. The media-access movement that pushed for broadcast TV, radio, and cable regulations from the 1960s to the 1980s never went away; only the terminology has changed: disinformation, net neutrality, hate speech, gatekeeper.

The decline in power of regulated media—broadcast radio and TV—and the rising power of unregulated internet-based media—social media, Netflix, and podcasts—meant that the FCC and Congress had few ways to shape American news and media consumption. In the words of Tim Wu, the law professor who coined the term “net neutrality,” the internet rules are about giving the agency the continuing ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”

Title II was the only tool available to bring this powerful new media—broadband access—under intense regulatory scrutiny by regulators and the political class. As net-neutrality advocate and Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition. Net neutrality would draw the agency into contentious mass-media regulation once again, distracting it from universal service efforts, spectrum access and auctions, and cleaning up the regulatory detritus that had slowly accumulated since the passage of the agency’s guiding statutes: the 1934 Communications Act and the 1996 Telecommunications Act.

There are probably items that Chairman Pai wishes he’d finished or had done slightly differently. He’s left a proud legacy, however, and his politically risky decision to repeal the Title II rules redirected agency energies away from no-win net-neutrality battles and toward broadband deployment and infrastructure. Great progress was made, and one hopes the Biden FCC chairperson will continue the trajectory Pai set.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order, a process that was perhaps subject to more scrutiny than that of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that failed to include any Ligado representatives or any FCC commissioners for their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal-interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard band from neighboring spectrum and reduce its base-station power levels by 99% compared to what it proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]

Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls.

Less will be said about the important work he did rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations are the legacies the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted: he both built the organization and made significant regulatory reforms.

Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.

Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.

Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai, over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard, as Democrats in general were deeply opposed to President Donald Trump, and some members of Congress found a divided FCC convenient.

The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.

On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.

Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analysis. To remedy this, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, though unfortunately by one of its rare party-line votes. Hopefully, Democratic commissioners have since learned the value of the OEA.

The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pass off unsubstantiated initiatives as substantive.

Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:

On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of millions of dollars more in taxes than it is spending on the program. [footnotes omitted]

The lack of transparency led many to distrust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had Chairman Pai not changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on, and it ensures that orders do not change after they are voted on.

The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech to restore light-touch regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgment for the FCC’s research on the impacts of Ligado’s proposal.

It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Harold Feld is senior vice president of Public Knowledge.]

Chairman Ajit Pai prioritized making new spectrum available for 5G. To his credit, he succeeded. Over the course of four years, Chairman Pai made available more high-band and mid-band spectrum, for licensed use and unlicensed use, than any other Federal Communications Commission chairman. He did so in the face of unprecedented opposition from other federal agencies, navigating the chaotic currents of the Trump administration with political acumen and courage. The Pai FCC will go down in history as the 5G FCC, and as the chairman who protected the primacy of FCC control over commercial spectrum policy.

At the same time, the Pai FCC will also go down in history as the most conventional FCC on spectrum policy in the modern era. Chairman Pai undertook no sweeping review of spectrum policy in the manner of former Chairman Michael Powell, and no introduction of new and radically different spectrum technologies such as the introduction of unlicensed spectrum and spread spectrum in the 1980s or of auctions in the 1990s. To the contrary, Chairman Pai actually rolled back the experimental short-term license structure adopted in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and replaced it with a conventional long-term license with a renewal expectation. He missed a once-in-a-lifetime opportunity to dramatically expand the availability of unlicensed use of the TV white spaces (TVWS) via repacking after the television incentive auction. And in reworking the rules for the 2.5 GHz band, although Pai laudably embraced the recommendation to create an application window for rural tribal lands, he rejected the proposal to allow nonprofits a chance to use the band for broadband in favor of conventional auction policy.

Ajit Pai’s Spectrum Policy Gave the US a Strong Position for 5G and Wi-Fi 6

To fully appreciate Chairman Pai’s accomplishments, we must first fully appreciate the urgency of opening new spectrum, and the challenges Pai faced from within the Trump administration itself. While providers can (and should) repurpose spectrum from older technologies to newer technologies, successful widespread deployment can only take place when sufficient amounts of new spectrum become available. This “green field” spectrum allows providers to build out new technologies with the most up-to-date equipment without disrupting existing subscriber services. The protocols developed for mobile 5G services work best with “mid-band” spectrum (generally considered to be frequencies between 2 GHz and 6 GHz). At the time Pai became chairman, the FCC did not have any mid-band spectrum identified for auction.

In addition, spectrum available for unlicensed use has become increasingly congested as more and more services depend on Wi-Fi and other unlicensed applications. Indeed, we have become so dependent on Wi-Fi for home broadband and networking that people routinely talk about buying “Wi-Fi” from commercial broadband providers rather than buying “internet access.” The United States further suffered a serious disadvantage in moving forward to next-generation Wi-Fi (Wi-Fi 6), because it lacked a contiguous block of spectrum large enough to take advantage of Wi-Fi 6’s gigabit capabilities. Without gigabit Wi-Fi, Americans will increasingly be unable to use the applications that gigabit broadband to the home makes possible.

But virtually all spectrum—particularly mid-band spectrum—has significant incumbents. These incumbents include federal users, particularly the U.S. Department of Defense (DOD). Finding new spectrum optimal for 5G required reclaiming spectrum from these incumbents. Unlicensed services do not require relocating incumbent users, but creating such “underlay” unlicensed spectrum access requires rules to prevent unlicensed operations from causing harmful interference to licensed services. Needless to say, incumbent services fiercely resist any change in spectrum-allocation rules, claiming that reducing their spectrum allocation or permitting unlicensed services would compromise valuable existing services by causing harmful interference.

The need to reallocate unprecedented amounts of spectrum to ensure successful 5G and Wi-Fi 6 deployment in the United States created an unholy alliance of powerful incumbents, commercial and federal, dedicated to blocking FCC action. Federal agencies—in violation of established federal spectrum policy—publicly challenged the FCC’s spectrum-allocation decisions. Powerful industry incumbents—such as the auto industry, the power industry, and defense contractors—aggressively lobbied Congress to reverse the FCC’s spectrum actions by legislation. The National Telecommunications and Information Administration (NTIA), the federal agency tasked with formulating federal spectrum policy, was missing in action as it rotated among different acting agency heads. As the chair and ranking member of the House Commerce Committee noted, this unprecedented and very public opposition by federal agencies to FCC spectrum policy threatened U.S. wireless interests both domestically and internationally.

Navigating this hostile terrain required Pai to exercise both political acumen and political will. Pai accomplished his goal of reallocating 600 MHz of spectrum for auction, opening over 1,200 MHz of contiguous spectrum for unlicensed use, and authorizing the new entrant Ligado Networks over the objections of the DOD. He did so by a combination of persuading President Donald Trump of the importance of maintaining U.S. leadership in 5G and insisting on impeccable analysis by the FCC’s engineers to support the reallocation and underlay decisions. On the most significant votes, Pai secured support (or partial support) from the Democrats. Perhaps most importantly, Pai successfully defended the institutional role of the FCC as the ultimate decisionmaker on commercial spectrum use, not subject to a “heckler’s veto” by other federal agencies.

Missed Innovation, ‘Command and Control Lite’

While acknowledging Pai’s accomplishments, a fair consideration of his legacy must also weigh his shortcomings. As chairman, Pai proved the most conservative FCC chair on spectrum policy since the 1980s. The Reagan FCC produced the unlicensed and spread-spectrum rules. The Clinton FCC created the spectrum-auction regime. The Bush FCC convened a spectrum task force and produced the concept of database management for unlicensed services, creating the TVWS and laying the groundwork for CBRS in the 3.5 GHz band. The Obama FCC recommended and created the world’s first incentive auction.

The Trump FCC did more than lack comparable accomplishments; it actively rolled back previous innovations. Within the first year of his chairmanship, Pai began a rulemaking designed to roll back the innovative priority access licenses (PALs). Under the rules adopted by the previous chairman, PALs provided exclusive use on a census-block basis for three years, with no expectation of renewal. Pai delayed the rollout of CBRS for two years to replace this approach with a standard license structure of 10 years with an expectation of renewal, explicitly to facilitate traditional carrier investment in traditional networks. Pai followed the same path when restructuring the 2.5 GHz band. While laudably creating a window for Native Americans to apply for 2.5 GHz licenses on rural tribal lands, Pai rejected proposals from nonprofits to adopt a window for noncommercial providers to offer broadband. Instead, he simply eliminated the educational requirement and adopted a standard auction for distribution of the remaining licenses.

Similarly, in the unlicensed space, Pai consistently declined to promote innovation. In the repacking following the broadcast incentive auction, Pai rejected the proposal to structure the repacking so as to ensure usable TVWS in every market. Instead, under Pai, the FCC managed the repacking to minimize the burden on incumbent primary and secondary licensees. As a result, major markets such as Los Angeles have zero channels available for unlicensed TVWS operation. This effectively relegates TVWS to a niche rural service, augmenting existing rural wireless ISPs.

The result is a modified form of “command and control,” the now-discredited system under which the FCC allocated licenses to provide specific services such as “FM radio” or “mobile pager service.” While preserving license flexibility in name, the licensing rules are explicitly structured to promote certain types of investment and business cases. The effect is to encourage the same types of licensees to offer improved and more powerful versions of the same types of services, while discouraging more radical innovations.

Conclusion

Chairman Pai can rightly take pride in his overall 5G legacy. He preserved the institutional role of the FCC as the agency responsible for expanding our nation’s access to wireless services against sustained attack by federal agencies determined to protect their own spectrum interests. He provided enough green-field spectrum for both licensed and unlicensed services to permit the successful deployment of 5G and Wi-Fi 6. At the same time, however, he failed to encourage the kinds of radical spectrum policies that made the United States the birthplace of such technologies as mobile broadband and Wi-Fi. We have won the “race” to next-generation wireless, but the players and services are likely to stay the same.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Randy May is president of the Free State Foundation.]

I am pleased to participate in this retrospective symposium regarding Ajit Pai’s tenure as Federal Communications Commission chairman. I have been closely involved in communications law and policy for nearly 45 years, and, as I’ve said several times since Chairman Pai announced his departure, he will leave as one of the most consequential leaders in the agency’s history. And, I should hastily add, consequential in a positive way, because it’s possible to be consequential in a not-so-positive way.

Chairman Pai’s leadership has been impactful in many different areas—for example, spectrum availability, media deregulation, and institutional reform, to name three—but in this tribute I will focus on his efforts regarding “net neutrality.” I use the quotes because the term has been used by many to mean many different things in many different contexts.

Within a year of becoming chairman, and with the support of fellow Republican commissioners Michael O’Rielly and Brendan Carr, Ajit Pai led the agency in reversing the public utility-like “net neutrality” regulation that had been imposed by the Obama FCC in February 2015 in what became known as the Title II Order. The Title II Order had classified internet service providers (ISPs) as “telecommunications carriers” subject to the same common-carrier regulatory regime imposed on monopolistic Ma Bell during most of the 20th century. While “forbearing” from imposing the full array of traditional common-carrier regulatory mandates, the Title II Order also subjected ISPs to sanctions if they violated an amorphous “general conduct standard,” which provided that ISPs could not “unreasonably” interfere with or disadvantage end users or edge providers like Google, Facebook, and the like.

The aptly styled Restoring Internet Freedom Order (RIF Order), adopted in December 2017, reversed nearly all of the Title II Order’s heavy-handed regulation of ISPs in favor of a light-touch regulatory regime. It was aptly named, because the RIF Order “restored” market “freedom” to internet access regulation that had mostly prevailed since the turn of the 21st century. It’s worth remembering that, in 1999, in opting not to require that newly emerging cable broadband providers be subjected to a public utility-style regime, Clinton-appointee FCC Chairman William Kennard declared: “[T]he alternative is to go to the telephone world…and just pick up this whole morass of regulation and dump it wholesale on the cable pipe. That is not good for America.” And worth recalling, too, that in 2002, the commission, under the leadership of Chairman Michael Powell, determined that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.”

It was this reliance on market freedom that was “restored” under Ajit Pai’s leadership. In an appearance at a Free State Foundation event in December 2016, barely a month before becoming chairman, then-Commissioner Pai declared: “It is time to fire up the weed whacker and remove those rules that are holding back investment, innovation, and job creation.” And he added: “Proof of market failure should guide the next commission’s consideration of new regulations.” True to his word, the weed whacker was used to cut down the public utility regime imposed on ISPs by his predecessor. And the lack of proof of any demonstrable market failure was at the core of the RIF Order’s reasoning.

It is true that, as a matter of law, the D.C. Circuit’s affirmance of the Restoring Internet Freedom Order in Mozilla v. FCC rested heavily on the court’s application of Chevron deference, just as it is true that Chevron deference played a central role in the affirmance of the Title II Order and the Brand X decision before that. And it would be disingenuous to suggest that, if a newly reconstituted Biden FCC reinstitutes a public utility-like regulatory regime for ISPs, Chevron deference won’t once again play a central role in the appeal.

But optimist that I am, and focusing not on what possibly may be done as a matter of law, but on what ought to be done as a matter of policy, the “new” FCC should leave in place the RIF Order’s light-touch regulatory regime. In affirming most of the RIF Order in Mozilla, the D.C. Circuit agreed there was substantial evidence supporting the commission’s predictive judgment that reclassification of ISPs “away from public-utility style regulation” was “likely to increase ISP investment and output.” And the court agreed there was substantial evidence to support the commission’s position that such regulation is especially inapt for “a dynamic industry built on technological development and disruption.”

Indeed, the evidence has only become more substantial since the RIF Order’s adoption. Here are only a few factual snippets: According to CTIA, wireless-industry investment for 2019 grew to $29.1 billion, up from $27.4 billion in 2018 and $25.6 billion in 2017. USTelecom estimates that wireline broadband ISPs invested approximately $80 billion in network infrastructure in 2018, up more than $3 billion from $76.9 billion in 2017. And total investment most likely increased in 2019 for wireline ISPs as it did for wireless ISPs. Figures cited in the FCC’s 2020 Broadband Deployment Report indicate that fiber broadband networks reached an additional 6.5 million homes in 2019, a 16% increase over the prior year and the largest single-year increase ever.

Additionally, more Americans have access to broadband internet access services, and at ever higher speeds. According to an April 2020 report by USTelecom, for example, gigabit internet service is available to at least 85% of U.S. homes, compared to only 6% of U.S. homes three-and-a-half years ago. In an October 2020 blog post, Chairman Pai observed that “average download speeds for fixed broadband in the United States have doubled, increasing by over 99%” since the RIF Order was adopted. Ookla Speedtests similarly show significant gains in mobile wireless speeds, climbing to 47/10 Mbps in September 2020 compared to 27/8 Mbps in the first half of 2018.

More evidentiary support could be offered regarding the positive results that followed adoption of the RIF Order, and I assume in the coming year it will be. But the import of abandoning public utility-like regulation of ISPs should be clear.

There is certainly much that Ajit Pai, the first-generation son of immigrants who came to America seeking opportunity in the freedom it offered, accomplished during his tenure. To my way of thinking, “Restoring Internet Freedom” ranks at—or at least near—the top of the list.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage is just that: economic theory. In this post, I want to remind people of the basic economic theories we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this as a proclamation of “what economics has to say about X,” but merely as a way to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible for such trade promotions to hurt consumers, and so it is theoretically possible that Google’s contracts do. Ultimately, though, the theoretical possibility of anticompetitive behavior that harms consumers does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we could conclude that consumers are worse off (if just a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. Retail stores compete with one another, as do smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 know that a monopolist chooses the quantity at which marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower, so Apple will increase quantity and lower the price. A sketch of the logic is below:

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How Trade Promotions Could Harm Customers

Since Bork’s writing, many theoretical papers have shown exceptions to his logic. There are times when retailers’ incentives are not aligned with those of their customers, and we need to take those possibilities seriously.

The most common way to show the harm of these deals (or, more commonly, of exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of a coordination problem. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view, as opposed to a legal one, I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching efficient size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

U.S. antitrust regulators have a history of narrowly defining relevant markets—often to the point of absurdity—in order to create market power out of thin air. The Federal Trade Commission (FTC) famously declared that Whole Foods and Wild Oats operated in the “premium natural and organic supermarkets market”—a narrowly defined market designed to exclude other supermarkets carrying premium natural and organic foods, such as Walmart and Kroger. Similarly, for the Staples-Office Depot merger, the FTC

narrowly defined the relevant market as “office superstore” chains, which excluded general merchandisers such as Walmart, K-Mart and Target, who at the time accounted for 80% of office supply sales.

Texas Attorney General Ken Paxton’s complaint against Google’s advertising business, joined by the attorneys general of nine other states, continues this tradition of narrowing market definition to shoehorn market dominance where it may not exist.

For example, one recent paper critical of Google’s advertising business narrows the relevant market first from media advertising to digital advertising, then to the “open” supply of display ads and, finally, even further to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the authors conclude Google’s market share is “perhaps sufficient to confer market power.”

While whittling down market definitions may achieve the authors’ purpose of providing a roadmap to prosecute Google, one byproduct is a mishmash of market definitions that generates as many as 16 relevant markets for digital display and video advertising, in many of which Google doesn’t have anything approaching market power (and in some of which Facebook, not Google, is the dominant player).

The Texas complaint engages in similar relevant-market gerrymandering. It claims that, within digital advertising, there exist several relevant markets and that Google monopolizes four of them:

  1. Publisher ad servers, which manage the inventory of ad space for a publisher (e.g., a newspaper’s website or a blog);
  2. Display ad exchanges, the “marketplace” in which auctions directly match publishers’ selling of ad space with advertisers’ buying of ad space;
  3. Display ad networks, which are similar to exchanges, except a network acts as an intermediary that collects ad inventory from publishers and sells it to advertisers; and
  4. Display ad-buying tools, which include demand-side platforms that collect bids for ad placement with publishers.

The complaint alleges, “For online publishers and advertisers alike, the different online advertising formats are not interchangeable.” But this glosses over a bigger challenge for the attorneys general: Is online advertising a separate relevant market from offline advertising?

Digital advertising, of which display advertising is a small part, is only one of many channels through which companies market their products. About half of today’s advertising spending in the United States goes to digital channels, up from about 10% a decade ago. Approximately 30% of ad spending goes to television, with the remainder going to radio, newspapers, magazines, billboards and other “offline” forms of media.

Physical newspapers now account for less than 10% of total advertising spending. Traditionally, newspapers obtained substantial advertising revenues from classified ads. As internet usage has increased, newspaper classifieds have been replaced by less costly and more effective internet classifieds—such as those offered by Craigslist—or by targeted ads on Google Maps or Facebook.

The price of advertising has fallen steadily over the past decade, while output has risen. Spending on digital advertising in the United States grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the producer price index (PPI) for internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year.
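As a rough check on that arithmetic (a back-of-the-envelope sketch using the rounded figures above, so the numbers are approximate):

\[ \left(\frac{130}{26}\right)^{1/9} \approx 1.20, \qquad (1 - 0.40)^{1/9} \approx 0.945, \qquad \frac{1.20}{0.945} \approx 1.27 \]

That is, nominal spending grew about 20% a year over the nine years, prices fell about 5.5% a year, and so the implied quantity of ads bought and sold grew by roughly 27% a year.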

Since 2000, advertising spending has been falling as a share of gross domestic product, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost and increasing total revenues is consistent with a growing and increasingly competitive market, rather than one of rising concentration and reduced competition.

There is little or no empirical data evaluating the extent to which online and offline advertising constitute distinct markets or the extent to which digital display is a distinct submarket of online advertising. As a result, analysis of adtech competition has relied on identifying several technical and technological factors—as well as the say-so of participants in the business—that the analysts assert distinguish online from offline and establish digital display (versus digital search) as a distinct submarket. This approach has been used and accepted, especially in cases in which pricing data has not been available.

But the pricing information that is available raises questions about the extent to which online advertising is a distinct market from offline advertising. For example, Avi Goldfarb and Catherine Tucker find that, when local regulations prohibit offline direct advertising, search advertising is more expensive, indicating that search and offline advertising are substitutes. In other research, they report that online display advertising circumvents, in part, local bans on offline billboard advertising for alcoholic beverages. In both studies, Goldfarb and Tucker conclude their results suggest online and offline advertising are substitutes. They also conclude this substitution suggests that online and offline markets should be considered together in the context of antitrust.

While this information is not sufficient to define a broader relevant market, it raises questions about relying solely on technical or technological distinctions and the say-so of market participants.

In the United States, plaintiffs do not get to define the relevant market. That is up to the judge or the jury. Plaintiffs have the burden to convince the court that a proposed narrow market definition is the correct one. With strong evidence that online and offline ads are substitutes, the court should not blindly accept the gerrymandered market definitions posited by the attorneys general.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase intermediaries’ cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, for instance by swiftly de-platforming and refusing access to anyone who might plausibly have engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.
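For readers who think in code, the obligation is simple enough to sketch. The following is a minimal illustration, not the DSA’s actual data schema; the field names are hypothetical stand-ins for the information items elided in the quotation above:

from dataclasses import dataclass
from typing import Optional

@dataclass
class TraderIdentity:
    """Illustrative KYBC record. Field names are hypothetical stand-ins,
    not the draft DSA's actual enumerated list."""
    legal_name: Optional[str] = None
    address: Optional[str] = None
    registration_number: Optional[str] = None

    def is_complete(self) -> bool:
        # The platform must hold this information *before* the trader
        # may promote or offer products to consumers in the Union.
        return all([self.legal_name, self.address, self.registration_number])

def may_trade(identity: Optional[TraderIdentity]) -> bool:
    # KYBC gate: no verified identity, no platform access. The remedy
    # is severing ties with the trader, not merely removing content.
    return identity is not None and identity.is_complete()

The point of the sketch is the shape of the rule: verification happens up front, and the sanction for non-compliance is exclusion of the business, not takedown of individual posts.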

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

My endeavor here is to scrutinize the questionable assessment articulated against default settings in the U.S. Justice Department’s lawsuit against Google. Default, I will argue, is no antitrust fault. The default at issue in the Google case differs drastically from the default at issue in the Microsoft case. In Part I, I argue the comparison is odious. In Part II, I argue that the implicit prohibition of default settings echoes the explicit prohibition of self-preferencing in search listings. These two aspects – the implicit prohibition of defaults and the explicit prohibition of self-preferencing – are the two legs of a novel and integrated theory of sanctioning corporate favoritism, the emergence of which cuts against the very grain of capitalism. In Part III, I note that the attempt to instill some corporate selflessness is at odds with competition on the merits and the spirit of fundamental economic freedoms.

When Default is No-Fault

The recent complaint filed by the DOJ and 11 state attorneys general claims that Google has abused its dominant position in the search-engine market in several ways, notably by making Google the default search engine both in the Google Chrome web browser for Android OS and in Apple’s Safari web browser for iOS. Undoubtedly, default status confers a noticeable advantage in attracting users – that is why it is sought and paid for. Nevertheless, a default setting cannot confer an unassailable position unless the product itself remains competitive. Furthermore, the default setting can hardly be proven anticompetitive in the Google case. Indeed, the DOJ puts considerable effort in the complaint into making the Google case resemble the 20-year-old Microsoft case. Former Federal Trade Commission Chairman William Kovacic commented: “I suppose the Justice Department is telling the court, ‘You do not have to be scared of this case. You’ve done it before […] This is Microsoft part 2.’”[1]

However, irrespective of the merits of the Microsoft case two decades ago, the Google default-setting case bears minimal resemblance to the Microsoft default setting of Internet Explorer. First, as opposed to the Microsoft case, where default meant pre-installed software (i.e., Internet Explorer)[2], the Google case does not involve pre-installation of the Google search engine (which is just a webpage) but a simple setting. This technical difference is significant: although “sticky”[3], a default setting can be circumvented with just one click[4]. That is quite unlike a default installation, which can only be circumvented by uninstalling the software[5] and then searching for and installing a new one[6]. Moreover, since there is no certainty that consumers will actually use the Google search engine, default status comes with advertising revenue-sharing agreements between Google and device manufacturers, mobile phone carriers, competing browsers and Apple[7]. These mutually beneficial deals represent a significant cost with no technical exclusivity[8]. In other words, the antitrust treatment of a tie-in between software and hardware in the Microsoft case cannot convincingly be extrapolated to the default setting of a “webware”[9] at issue in the Google case.

Second, the Google case cannot legitimately extrapolate from the Microsoft case on another technical (and commercial) front: the Microsoft case was a classic tie-in case in which the tied product (Internet Explorer) was tied into the main product (Windows). As a traditional tie-in scenario, the tied product (Internet Explorer) was “consistently offered, promoted, and distributed […] as a stand-alone product separate from, and not as a component of, Windows […]”[10]. In contrast, Google has never sold Google Chrome or Android OS. It offered both for free, conditioned on the Google search engine being the default setting. The very fact that Google Chrome and Android OS have never been “stand-alone” products, to use the Microsoft case’s language, together with the absence of any software installation, dramatically differentiates the Google case from the Microsoft case. The Google case is not a traditional tie-in case: it is a case against a default setting where both products (the primary and the related product) are given away for free, are not saleable, and are neither tangible nor intangible goods but popular digital services valued for their innovativeness and ease of use. The Microsoft “complaint challenge[d] only Microsoft’s concerted attempts to maintain its monopoly in operating systems and to achieve dominance in other markets, not by innovation and other competition on the merits, but by tie-ins.” Quite noticeably, the Google complaint does not allege a tie-in with respect to Google Chrome or Android OS.

The complaint refers to tie-ins only with respect to Google’s apps being pre-installed on Android OS. Therefore, concerning Google’s dominance in the search-engine market, it cannot be said that the default setting of Google search in Android OS entails a tie-in. The Google search engine has no distribution channel (since it is only a website) other than downstream partnerships (i.e., vertical deals with Android device manufacturers). To sanction default settings with downstream trading partners is tantamount to denying legitimate means of securing distribution channels for proprietary, zero-priced services. Carried further, this detrimental logic would mean that Apple may no longer offer its own apps on its own iPhones or, in offline markets, that a retailer may no longer offer its own (default) bags at the till since doing so excludes rivals’ bags. Products and services stripped of any adjacent products and markets (i.e., an iPhone or Android OS with no apps, or a shopkeeper with no bundled services) would dramatically increase consumers’ search costs and destroy innovators’ essential distribution channels for innovative business models, while producing few departures from the status quo so long as consumers continue to value default products[11].

Default should not be an antitrust fault: the Google case makes default settings a new line of antitrust injury absent any tie-in. In conclusion, as free webware, Google search’s default setting cannot be compared to the default installation in the Microsoft case, since minimal consumer stickiness entails (almost) no switching costs. As free software, Google’s default apps cannot be compared to the Microsoft case either, since pre-installation is the sine qua non of the highly valued services (Android OS) voluntarily chosen by device manufacturers. Default settings on downstream products can only reasonably be considered antitrust injury when the dominant company is erroneously treated as a de facto essential facility – something evidenced by the similar prohibition of self-preferencing.

When Self-Preference is No Defense

Self-preferencing is to listings what the default setting is to operating systems. Both are ways to market one’s own products (i.e., alternatives to marketing toward end-consumers). While a default setting may come with both free products and financial payments (Android OS and advertising revenue sharing), self-preferencing may come with foregone advertising revenues in order to promote one’s own products. The two can be understood as two sides of the same coin:[12] generating distribution channels for the ad-funded main product – Google’s search engine. Both are complex advertising channels, since both favor one’s own products in the competition for consumers’ attention. Absent these channels, the payments made under default agreements and the advertising revenues foregone in self-preferencing one’s own products would morph into marketing and advertising expenses for the Google search engine aimed at end-consumers.

The DOJ complaint charges that “Google’s monopoly in general search services also has given the company extraordinary power as the gateway to the internet, which [it] uses to promote its own web content and increase its profits.” This blame was at the core of the European Commission’s Google Shopping decision in 2017[13]: it essentially holds Google accountable for having, because of its ad-funded business model, promoted its own advertising products and demoted organic links in search results. On this view, Google’s search results are no longer relevant, but are listed solely to maximize advertising revenue.

But this argument is circular: should these search results become irrelevant, Google’s core business would become less attractive, thereby generating less advertising revenue. Such self-inflicted inefficiency would deprive Google of valuable advertising streams and incentivize end-consumers to switch to rival search engines such as Bing, DuckDuckGo, Amazon (for product searches), etc. An ad-funded company such as Google therefore needs to arbitrage reasonably between its advertising objectives and the efficiency of its core activities (here, zero-priced organic search services). To downplay (ad-funded) self-preferencing in order to foster (zero-priced) organic search quality would disregard the two-sidedness of the Google platform: it would harm advertisers and the viability of the ad-funded business model without providing the consumer and innovation protection it purports to provide. The problematic and undesirable concept of “search neutrality” would mean algorithmic micro-management for the sake of an “objective” listing deemed acceptable only in the eyes of the regulator.

Furthermore, self-preferencing entails a sort of positive discrimination toward one’s own products[14]. While discrimination has traditionally been a line of antitrust injury, self-preferencing is an “epithet”[15] that sits outside antitrust’s remit, for good reasons[16]. Indeed, should self-interested (i.e., rationally minded) companies and individuals be legally compelled to self-demote their own products and services? If only big (how big?) companies are legally compelled to self-demote their products and services, to what extent will exempted companies that engage in self-preferencing eventually become liable to do so as well?

Indeed, many uncertainties, legal and economic, may spawn from the emerging prohibition of self-preferencing. More fundamentally, antitrust liability may clash with basic corporate-governance principles, under which self-interestedness both allows self-preferencing and commands such self-promotion. The limits of antitrust are reached when two sets of legal regimes, both applicable to companies, prescribe contradictory commercial conduct. To what extent may Amazon no longer promote its own series on Amazon Video the way Netflix does? To what extent can Microsoft no longer promote its Bing search engine in order to compete effectively with Google’s? To what extent may Uber no longer promote UberEATS in order to compete effectively with delivery services? Not only is the business of business doing business[17]; it is a duty for which shareholders may hold managers to account.

The self is moral; there is a corporate morality in business self-interest. In other words, corporate selflessness runs counter to business ethics, since corporate self-interest underlies the self’s rivalrous positioning within a competitive order. Absent corporate self-interest, self-sacrifice may generate value destruction for the sake of unjustified and ungrounded claims. The emerging prohibition of self-preferencing, like the ban on setting one’s own products as defaults within one’s own proprietary products, materializes the defeat of the corporate self. Both trends coalesce to instill a legally embedded duty of self-sacrifice for the sake of competitors’ welfare, in place of the traditional consumer welfare and the dynamics of innovation, which are never unleashed absent appropriability. In conclusion, to expect firms, however big or small, to act irrespective of their identities (i.e., with corporate selflessness) would constitute an antitrust error and would be at odds with capitalism.

Toward an Integrated Theory of Disintegrating Favoritism

The Google lawsuit primarily blames Google for default settings secured through several deals. It also casts self-preferencing as anticompetitive conduct under antitrust rules. Both charges are novel and dubious in their remits. They nevertheless represent a fundamental catalyst for the development of a new and problematic unified antitrust theory prohibiting favoritism: companies may no longer favor their own products and services, whether vertically or horizontally, irrespective of consumer benefits, superior-efficiency arguments, or enhanced dynamic capabilities. Indeed, via an unreasonably expanded vision of leveraging, antitrust enforcement is furtively banning companies from favoring their own products and services: greater consumer choice substitutes for consumer welfare, protecting rivals’ opportunities to innovate and compete substitutes for the essence of competition and innovation, and limiting the outreach and size of companies substitutes for assessing their capabilities and efficiencies. Leveraging becomes suspect, and corporate self-favoritism stands accused. The Google lawsuit materializes this impractical trend, which further enshrines the precautionary approach to antitrust enforcement[18].


[1] Jessica Guynn, Google Justice Department antitrust lawsuit explained: this is what it means for you. USA Today, October 20, 2020.

[2] The software (Internet Explorer) was tied in the hardware (Windows PC).

[3] U.S. v Google LLC, Case A:20, October 20, 2020, 3 (referring to default settings as “especially sticky” with respect to consumers’ willingness to change).

[4] While the DOJ affirms that “being the preset default general search engine is particularly valuable because consumers rarely change the preset default,” it provides no evidence of the breadth of such consumer stickiness. To be sure, default status does not necessarily lead to usage, as the case of South Korea illustrates: despite Google’s preset default settings, the search engine Naver remains dominant in the national search market with over 70% market share. The rivalry Naver exerts on Google demonstrates the limits of consumer stickiness to default settings. See Alesia Krush, Google vs. Naver: Why Can’t Google Dominate Search in Korea?, Link-Assistant.Com, available at: https://www.link-assistant.com/blog/google-vs-naver-why-cant-google-dominate-search-in-korea/. As the dominant search engine in Korea, Naver is itself subject to antitrust investigations over leveraging practices similar to Google’s elsewhere; see Shin Ji-hye, FTC sets up special to probe Naver, Google, The Korea Herald, November 19, 2019, available at: http://www.koreaherald.com/view.php?ud=20191119000798; Kim Byung-wook, Complaint against Google to be filed with FTC, The Investor, December 14, 2020, available at: https://www.theinvestor.co.kr/view.php?ud=20201123000984 (reporting a complaint by Naver and other Korean IT companies against Google’s 30% commission policy on Google Play Store apps).

[5] For instance, the complaint acknowledged that “Microsoft designed Windows 98 so that removal of Internet Explorer by OEMs or end users is operationally more difficult than it was in Windows 95,” in U.S. v Microsoft Corp., Civil Action No 98-1232, May 18, 1998, para. 20.

[6] The DOJ complaint itself quotes one search competitor reported to have noted consumer stickiness “despite the simplicity of changing a default setting to enable customer choice […]” (para. 47). The default search setting is thus remarkably simple to bypass, yet consumers do not often do so, whether due to satisfaction with the Google search engine or to search and opportunity costs.

[7] See para.56 of the DOJ complaint.

[8] Competing browsers can always welcome rival search engines and competing search engine apps can always be downloaded despite revenue sharing agreements. See paras.78-87 of the DOJ complaint.

[9] Google search engine is nothing but a “webware” – a complex set of algorithms that work via online access of a webpage with no prior download. For a discussion on the definition of webware, see https://www.techopedia.com/definition/4933/webware .

[10] Id. para.21.

[11] Such an outcome would frustrate traditional ways of offering computers and mobile devices, as the DOJ itself acknowledges in the Google complaint: “new computers and new mobile devices generally come with a number of preinstalled apps and out-of-the-box settings. […] Each of these search access points can and almost always does have a preset default general search engine,” at para. 41. It also appears that preset default general search engines are common commercial practice: as the DOJ complaint itself notes when discussing Google’s rivals (Microsoft’s Bing and Amazon’s Fire OS), “Amazon preinstalled its own proprietary apps and agreed to make Microsoft’s Bing the preset default general search engine,” at para. 130. The complaint fails to identify alternative search engines that are not preset defaults, thus implicitly recognizing the practice as widespread.

[12] To use Vesterdof’s language, see Bo Vesterdorf, Theories of Self-Preferencing and Duty to Deal – Two Sides of the Same Coin, Competition Law & Policy Debate 1(1) 4, (2015). See also Nicolas Petit, Theories of Self-Preferencing under Article 102 TFEU: A Reply to Bo Vesterdorf, 5-7 (2015).

[13] Case 39740 Google Search (Shopping). Here the foreclosure effects of self-preferencing are only speculated: “the Commission is not required to prove that the Conduct has the actual effect of decreasing traffic to competing comparison shopping services and increasing traffic to Google’s comparison-shopping service. Rather, it is sufficient for the Commission to demonstrate that the Conduct is capable of having, or likely to have, such effects.” (para. 601 of the Decision). See P. Ibáñez Colomo, Indispensability and Abuse of Dominance: From Commercial Solvents to Slovak Telekom and Google Shopping, 10 Journal of European Competition Law & Practice 532 (2019); Aurelien Portuese, When Demotion is Competition: Algorithmic Antitrust Illustrated, Concurrences, no. 2, May 2018, 25-37; Aurelien Portuese, Fine is Only One Click Away, Symposium on the Google Shopping Decision, Case Note, 3 Competition and Regulatory Law Review (2017).

[14] For a general discussion on law and economics of self-preferencing, see Michael A. Salinger, Self-Preferencing, Global Antitrust Institute Report, 329-368 (2020).

[15] Pablo Ibanez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition (2020) (concluding that self-preferencing is “misleading as a legal category”).

[16] See, for instance, Pedro Caro de Sousa, What Shall We Do About Self-Preferencing?, Competition Policy International, June 2020.

[17] Milton Friedman, The Social Responsibility of Business is to Increase Its Profits, New York Times, September 13, 1970. This echoes Adam Smith’s famous statement that “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own self-interest,” from the 1776 Wealth of Nations. In Ayn Rand’s philosophy, the only alternative to rational self-interest is to sacrifice one’s own interests either for fellow men (altruism) or for supernatural forces (mysticism). See Ayn Rand, The Objectivist Ethics, in The Virtue of Selfishness, Signet (1964).

[18] Aurelien Portuese, European Competition Enforcement and the Digital Economy: The Birthplace of Precautionary Antitrust, Global Antitrust Institute’s Report on the Digital Economy, 597-651.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.

Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.

What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.

That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what not. And a more popular search engine is a clear winner.

But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.

That was the economy of scale.

The Structure of the Google Case

Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status on smartphones, the competitors are rival search providers, and the market is search advertising.

Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.

The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.

Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.

The problem of anticompetitive conduct is only slightly more difficult.

Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.

(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer-harm requirement decides whether the requisite harm to competitors is also harm to competition.)

It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.

Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.

The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.

That leaves consumer harm. And here is where things get iffy.

Usage as a Product Improvement: A Very Convenient Argument

The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.

But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?

If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.

So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?

Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.

And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.

None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.

This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.

The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.

It turns out that has happened before.

Economies of Scale as a Product Improvement: Once a Convenient Argument

Once upon a time, antitrust exempted another kind of business whose products improve the more people use them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use lies not in the quality of the product but in the cost per unit of producing it.

The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.

Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.
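In the simplest textbook formulation (a stylized sketch, with fixed cost F and constant marginal cost c), average cost declines monotonically in output:

\[ AC(q) = \frac{F}{q} + c, \qquad \frac{d\,AC}{dq} = -\frac{F}{q^{2}} < 0 \]

With the mill’s $10 million investment, the fixed-cost share of each yard falls from $10 at 1 million yards to a dime at 100 million yards.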

But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.

If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.

For antitrust in the late 19th and early 20th centuries, beguiled by this advantage of size, exclusive dealing, refusals to deal, even the knife in a competitor’s back: whether these ran afoul of other areas of law or not, it was all for the better, because it allowed industrial enterprises to achieve economies of scale.

It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.

The Monopolistic Competition Revolution

The revolution came in the form of the theory of monopolistic competition and its cousin, the theory of creative destruction, developed between the 1920s and 1940s by Edward Chamberlin, Joan Robinson and Joseph Schumpeter.

These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.

From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.

The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.

What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation— “creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.

This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.

Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.

Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.

It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.

Usage-Based Improvements Are Not Like Economies of Scale

The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.

But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.

Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?

Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.

Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.

A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.

To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.

If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.

And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.

But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.

Americans Keep Voting to Centralize the Internet

In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.

For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.

But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.

The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.

But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.

Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.

But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.

Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.

The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.

And that Google should win on consumer harm.

Remember the Telephone

Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.

The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.
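A common formalization of this point (a rule of thumb sometimes called Metcalfe’s law, offered here as illustration rather than anything in the AT&T record) is that the number of possible connections among n subscribers grows roughly with the square of n:

\[ \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2} \]

Doubling the subscriber base thus roughly quadruples the number of pairs who can call one another.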

Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.

Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.

Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.

The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.

The equivalent of interconnection in search, that the benefits of usage, in the form of data and attention, be shared among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of decades of regulatory experience with the defendant’s operations that the district court in 1982 could draw upon in the AT&T case.

The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.

Microsoft Not to the Contrary, Because Users Were in Common

Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.

As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.

That prevented Netscape, the argument went, from growing to compete with Windows in the operating-system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.

The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.

The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.

This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.

It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.

The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.

That means that, unlike Google and rival search engines, Windows and Netscape shared users.

So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.

By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.

Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.

Though it does not help the government in the Google case, Microsoft still offers a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.

And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Google is facing a series of lawsuits in 2020 and 2021 that challenge some of the most fundamental parts of its business, and of the internet itself — Search, Android, Chrome, Google’s digital-advertising business, and potentially other services as well. 

The U.S. Justice Department (DOJ) has brought a case alleging that Google’s deals with Android smartphone manufacturers, Apple, and third-party browsers to make Google Search their default general search engine are anticompetitive (ICLE’s tl;dr on the case is here), and the State of Texas has brought a suit against Google’s display-advertising business. These follow a market study by the United Kingdom’s Competition and Markets Authority that recommended an ex ante regulator and code of conduct for Google and Facebook. At least one more suit is expected to follow.

These lawsuits will test ideas that are at the heart of modern antitrust debates: the roles of defaults and exclusivity deals in competition; the costs of self-preferencing and its benefits to competition; the role of data in improving software and advertising, and its role as a potential barrier to entry; and potential remedies in these markets and their limitations.

This Truth on the Market symposium asks contributors with wide-ranging viewpoints to comment on some of these issues as they arise in the lawsuits being brought—starting with the U.S. Justice Department’s case against Google for alleged anticompetitive practices in search distribution and search-advertising markets—and continuing throughout the duration of the lawsuits.