
Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year in which available resources may not be sufficient to meet peak demand.

Supply and demand provide the simplest explanation for the region’s rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to create growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that the 21,000 MW of wind, solar, and battery-storage capacity anticipated to be added to the grid by 2030 amounts to only about half of the expected fossil-fuel retirements.

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will come after another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you’ll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: The pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.

Federal Trade Commission (FTC) Chair Lina Khan recently joined with FTC Commissioner Rebecca Slaughter to file a “written submission on the public interest” in the U.S. International Trade Commission (ITC) Section 337 proceeding concerning imports of certain cellular-telecommunications equipment covered by standard essential patents (SEPs). SEPs are patents that “read on” technology adopted for inclusion in a standard. Regrettably, the commissioners’ filing embodies advice that, if followed, would effectively preclude Section 337 relief to SEP holders. Such a result would substantially reduce the value of U.S. SEPs and thereby discourage investments in standards that help drive American innovation.

Section 337 of the Tariff Act authorizes the ITC to issue “exclusion orders” blocking the importation of products that infringe U.S. patents, subject to certain “public interest” exceptions. Specifically, before issuing an exclusion order, the ITC must consider:

  1. the public health and welfare;
  2. competitive conditions in the U.S. economy;
  3. production of like or directly competitive articles in the United States; and
  4. U.S. consumers.

The Khan-Slaughter filing urges the ITC to consider the impact that issuing an exclusion order against a willing licensee implementing a standard would have on competition and consumers in the United States. The filing concludes that “where a complainant seeks to license and can be made whole through remedies in a different U.S. forum [a federal district court], an exclusion order barring standardized products from the United States will harm consumers and other market participants without providing commensurate benefits.”

Khan and Slaughter’s filing takes a one-dimensional view of the competitive effects of SEP rights. In short, it emphasizes that:

  1. standardization empowers SEP owners to “hold up” licensees by demanding more for a technology than it would have been worth, absent the standard;
  2. “hold-ups” lead to higher prices and may discourage standard-setting activities and collaboration, which can delay innovation;
  3. many standard-setting organizations require FRAND (fair, reasonable, and non-discriminatory) licensing commitments from SEP holders to preclude hold-up and encourage standards adoption;
  4. FRAND commitments ensure that SEP licenses will be available at rates limited to the SEP’s “true” value;
  5. the threat of ITC exclusion orders would empower SEP holders to coerce licensees into paying “anticompetitively high” supra-FRAND licensing rates, discouraging investments in standard-compliant products;
  6. inappropriate exclusion orders harm consumers in the short term by depriving them of desired products and, in the longer run, through reduced innovation, competition, quality, and choice;
  7. thus, where the standard implementer is a “willing licensee,” an exclusion order would be contrary to the public interest; and
  8. as a general matter, exclusionary relief is incongruent and against the public interest where a court has been asked to resolve FRAND terms and can make the SEP holder whole.

In essence, Khan and Slaughter recite a parade of theoretical horribles, centered on anticompetitive hold-ups, to call for denying exclusion orders to SEP owners on public-interest grounds. Their filing’s analysis, however, fails as a matter of empirics, law, and sound economics.

First, the filing fails to note that there is a lack of empirical support for anticompetitive hold-up being a problem at all (see, for example, here, here, and here). Indeed, a far more serious threat is “hold-out,” whereby the ability of implementers to infringe SEPs without facing serious consequences leads to an inefficient undervaluation of SEP rights (see, for example, here). (At worst, implementers held to be infringers in court will have to pay a “reasonable” licensing fee at some future date, since U.S. case law, unlike foreign case law, has essentially eliminated SEP holders’ ability to obtain an injunction.)

Second, as a legal matter, the filing’s logic would undercut the central statutory purpose of Section 337, which is to provide all U.S. patent holders a right to exclude infringing imports. Section 337 does not distinguish between SEPs and other patents—all are entitled to full statutory protection. Former ITC Chair Deanna Tanner Okun, in critiquing a draft administration policy statement that would severely curtail the rights of SEP holders, assessed the denigration of Section 337 statutory protections in a manner that is equally applicable to the Khan-Slaughter filing:

The Draft Policy Statement also circumvents Congress by upending the statutory framework and purpose of Section 337, which includes the ITC’s practice of evaluating all unfair acts equally. Although the draft disclaims any “unique set of legal rules for SEPs,” it does, in fact, create a special and unequal analysis for SEPs. The draft also implies that the ITC should focus on whether the patents asserted are SEPs when judging whether an exclusion order would adversely affect the public interest. The draft fundamentally misunderstands the ITC’s purpose, statutory mandates, and overriding consideration of safeguarding the U.S. public interest and would — again, without statutory approval — elevate SEP status of a single patent over other weighty public interest considerations. The draft also overlooks Presidential review requirements, agency consultation opportunities and the ITC’s ability to issue no remedies at all.

[Notably,] Section 337’s statutory language does not distinguish the types of relief available to patentees when SEPs are asserted.

Third, Khan and Slaughter not only assert theoretical competitive harms from hold-ups that have not been shown to exist (while ignoring the far more real threat of hold-out), they also ignore the foregone dynamic economic gains that would stem from limitations on SEP rights (see, generally, here). Denying SEP holders the right to obtain a Section 337 exclusion order, as advocated by the filing, deprives them of a key property right. It thereby establishes an SEP “liability rule” (SEP holder relegated to seeking damages), as opposed to a “property rule” (SEP holder may seek injunctive relief) as the SEP holder’s sole means to obtain recompense for patent infringement. As my colleague Andrew Mercado and I have explained, a liability-rule approach denies society the substantial economic benefits achievable through an SEP property rule:

[U]nder a property rule, as contrasted to a liability rule, innovation will rise and drive an increase in social surplus, to the benefit of innovators, implementers, and consumers. 

Innovators’ welfare will rise. … First, innovators already in the market will be able to receive higher licensing fees due to their improved negotiating position. Second, new innovators enticed into the market by the “demonstration effect” of incumbent innovators’ success will in turn engage in profitable R&D (to them) that brings forth new cycles of innovation.

Implementers will experience welfare gains as the flood of new innovations enhances their commercial opportunities. New technologies will enable implementers to expand their product offerings and decrease their marginal cost of production. Additionally, new implementers will enter the market as innovation accelerates. Seeing the opportunity to earn high returns, new implementers will be willing to pay innovators a high licensing fee in order to produce novel and improved products.

Finally, consumers will benefit from expanded product offerings and lower quality-adjusted prices. Initial high prices for new goods and services entering the market will fall as companies compete for customers and scale economies are realized. As such, more consumers will have access to new and better products, raising consumers’ surplus.

In conclusion, the ITC should accord zero weight to Khan and Slaughter’s fundamentally flawed filing in determining whether ITC exclusion orders should be available to SEP holders. Denying SEP holders a statutorily provided right to exclude would tend to undermine the value of their property, diminish investment in improved standards, reduce innovation, and ultimately harm consumers—all to the detriment, not the benefit, of the public interest.  

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Daniel Lyons is a professor of law at Boston College Law School and a visiting fellow at the American Enterprise Institute.]

For many, the chairmanship of Ajit Pai is notable for its many headline-grabbing substantive achievements, including the Restoring Internet Freedom order, 5G deployment, and rural buildout—many of which have been or will be discussed in this symposium. But that conversation is incomplete without also acknowledging Pai’s careful attention to the basic blocking and tackling of running a telecom agency. The last four years at the Federal Communications Commission were marked by small but significant improvements in how the commission functions, and few are more important than the chairman’s commitment to transparency.

Draft Orders: The Dark Ages Before 2017

This commitment is most notable in Pai’s revisions to the open meeting process. From time immemorial, the FCC chairman would set the agenda for the agency’s monthly meeting by circulating draft orders to the other commissioners three weeks in advance. But the public was deliberately excluded from that distribution list. During this period, the commissioners would read proposals, negotiate revisions behind the scenes, then meet publicly to vote on final agency action. But only after the meeting—often several days later—would the actual text of the order be made public.

The opacity of this process had several adverse consequences. Most obviously, the public lacked details about the substance of the commission’s deliberations. The Government in the Sunshine Act requires the agency’s meetings to be made public so the American people know what their government is doing. But without the text of the orders under consideration, the public had only a superficial understanding of what was happening each month. The process was reminiscent of House Speaker Nancy Pelosi’s famous gaffe that Congress needed to “pass the [Affordable Care Act] bill so that you can find out what’s in it.” During the high-profile deliberations over the Open Internet Order in 2015, then-Commissioner Pai made significant hay over this secrecy, repeatedly posting pictures of himself with the 300-plus-page order on Twitter with captions such as “I wish the public could see what’s inside” and “the public still can’t see it.”

Other consequences were less apparent, but more detrimental. Because the public lacked detail about key initiatives, the telecom media cycle could be manipulated by strategic leaks designed to shape the final vote. As then-Commissioner Pai testified to Congress in 2016:

[T]he public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Sometimes, this secrecy backfired on the chairman, such as when net-neutrality advocates used media pressure to shape the 2014 Open Internet NPRM. Then-Chairman Tom Wheeler’s proposed order sought to follow the roadmap laid out by the D.C. Circuit’s Verizon decision, which relied on Title I to prevent ISPs from blocking content or acting in a “commercially unreasonable manner.” Proponents of a more aggressive Title II approach leaked these details to the media in a negative light, prompting tech journalists and advocates to unleash a wave of criticism alleging the chairman was “killing off net neutrality to…let the big broadband providers double charge.” In full damage control mode, Wheeler attempted to “set the record straight” about “a great deal of misinformation that has recently surfaced regarding” the draft order. But the tempest created by these leaks continued, pressuring Wheeler into adding a Title II option to the NPRM—which, of course, became the basis of the 2015 final rule.

This secrecy also harmed agency bipartisanship, as minority commissioners sometimes felt as much in the dark as the general public. As Wheeler scrambled to address Title II advocates’ concerns, he reportedly shared revised drafts with fellow Democrats but did not circulate the final draft to Republicans until less than 48 hours before the vote—leading Pai to remark cheekily that “when it comes to the Chairman’s latest net neutrality proposal, the Democratic Commissioners are in the fast lane and the Republican Commissioners apparently are being throttled.” Similarly, Pai complained during the 2014 spectrum screen proceeding that “I was not provided a final version of the item until 11:50 p.m. the night before the vote and it was a substantially different document with substantively revised reasoning than the one that was previously circulated.”

Letting the Sunshine In

Eliminating this culture of secrecy was one of Pai’s first decisions as chairman. Less than a month after assuming the reins at the agency, he announced that the FCC would publish all draft items at the same time they are circulated to commissioners, typically three weeks before each monthly meeting. While this move was largely applauded, some were concerned that this transparency would hamper the agency’s operations. One critic suggested that pre-meeting publication would hamper negotiations among commissioners: “Usually, drafts created negotiating room…Now the chairman’s negotiating position looks like a final position, which undercuts negotiating ability.” Another, while supportive of the change, was concerned that the need to put a draft order in final form well before a meeting might add “a month or more to the FCC’s rulemaking adoption process.”

Fortunately, these concerns proved to be unfounded. The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan—compared to 33% and 69.9%, respectively, under Chairman Wheeler. 

This increased transparency also improved the overall quality of the agency’s work product. In a 2018 speech before the Free State Foundation, Commissioner Mike O’Rielly explained that “drafts are now more complete and more polished prior to the public reveal, so edits prior to the meeting are coming from Commissioners, as opposed to there being last minute changes—or rewrites—from staff or the Office of General Counsel.” Publishing draft orders in advance allows the public to flag potential issues for revision before the meeting, which improves the quality of the final draft and reduces the risk of successful post-meeting challenges via motions for reconsideration or petitions for judicial review. O’Rielly went on to note that the agency seemed to be running more efficiently as well, as “[m]eetings are targeted to specific issues, unnecessary discussions of non-existent issues have been eliminated, [and] conversations are more productive.”

Other Reforms

While pre-meeting publication was the most visible improvement to agency transparency, there are other initiatives also worth mentioning.

  • Limiting Editorial Privileges: Chairman Pai dramatically limited “editorial privileges,” a longtime tradition that allowed agency staff to make changes to an order’s text even after the final vote. Under Pai, editorial privileges were limited to technical and conforming edits only; substantive changes were not permitted unless they were proposed directly by a commissioner and only in response to new arguments offered by a dissenting commissioner. This reduces the likelihood of a significant change being introduced outside the public eye.
  • Fact Sheet: Adopting a suggestion of Commissioner Mignon Clyburn, Pai made it a practice to preface each published draft order with a one-page fact sheet that summarized the item in lay terms, as much as possible. This made the agency’s monthly work more accessible and transparent to members of the public who lacked the time to wade through the full text of each draft order.
  • Online Transparency Dashboard: Pai also launched an online dashboard on the agency’s website. This dashboard offers metrics on the number of items currently pending at the commission by category, as well as quarterly trends over time.
  • Restricting Comment on Upcoming Items: As a gesture of respect to fellow commissioners, Pai committed that the chairman’s office would not brief the press or members of the public, or publish a blog, about an upcoming matter before it was shared with other commissioners. This was another step toward reducing the strategic use of leaks or selective access to guide the tech media news cycle.

And while it’s technically not a transparency reform, Pai also deserves credit for his willingness to engage the public as the face of the agency. He was the first FCC commissioner to join Twitter, and throughout his chairmanship he maintained an active social media presence that helped personalize the agency and make it more accessible. His commitment to this channel is all the more impressive when one considers the way some opponents used these platforms to hurl a steady stream of hateful, often violent and racist invective at him during his tenure.

Pai deserves tremendous credit for spearheading these efforts to bring the agency out of the shadows and into the sunlight. Of course, he was not working alone. Pai shares credit with other commissioners and staff who supported transparency and worked to bring these policies to fruition, most notably former Commissioner O’Rielly, who beat a steady drum for process reform throughout his tenure.

We do not yet know who President Joe Biden will appoint as Pai’s successor. It is fair to assume that whoever is chosen will seek to put his or her own stamp on the agency. But let’s hope that enhanced transparency and the other process reforms enacted over the past four years remain a staple of agency practice moving forward. They may not be flashy, but they may prove to be the most significant and long-lasting impact of the Pai chairmanship.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]

I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a telecom policy event in Washington, D.C., from not long ago. The story featured a couple of well-known participants in federal telecom policy as they were talking about how to close the rural digital divide. The punchline of the story was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.

And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.

Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more of his time as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on this trip. I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.

Rural Digital Divide Policy on the Ground

Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:

For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]

Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.

It’s not surprising that this is the case. Consider North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models. The cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are relatively fewer economies of scale that allow carriers to amortize these costs across multiple households. We can add that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.

On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.

Pai’s Progress Closing the Rural Digital Divide

This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.

On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Commissioner Pai’s resume as chairman. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction based approach to supporting service deployment.

This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed-wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.

The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.

Then there are the chairman’s accomplishments on the spectrum and wireless-internet fronts. Here, he faced resistance both from within the government and from industry. In some of the more absurd episodes of government in-fighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this is going to prove to be a good or bad bet in the long term—whether fixed wireless is going to be able to offer the quality and speed of service its proponents promise or whether it instead will be a short-run misallocation of capital that will require clawbacks and re-awards of funding in another few years—but the embrace of the technology demonstrated decisive leadership and thawed a too-limited and ossified understanding of what technologies could be used to offer service. Again, as said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.

There is more that the commission did under Chairman Pai’s leadership, beyond its formal orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. And similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee. It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.

But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Commissioner Pai said on his first day as chairman that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures that stakeholders regularly engage with an appointed representative of the FCC, putting in the time and miles to linger with a farmer to talk about the upcoming harvest season, these things make that priority real.

Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that will come after—that will serve us well for years to come.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order – a process that was perhaps subject to more scrutiny than the process of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that failed to include any Ligado representatives or any FCC commissioners for their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard band separating its network from neighboring spectrum and also reduce its base-station power levels by 99% compared to what Ligado proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.
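As a quick illustrative aside (this back-of-the-envelope conversion is mine, not a figure taken from the order): in the decibel terms engineers typically use, a 99% cut in transmit power works out to roughly a 20 dB reduction, since

$$
10 \log_{10}\!\left(\frac{P_{\text{after}}}{P_{\text{before}}}\right) = 10 \log_{10}(0.01) = -20\ \text{dB}.
$$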

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.

Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.

What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.

That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what is not. And a more popular search engine is a clear winner.

But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.

That was the economy of scale.

The Structure of the Google Case

Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status on smartphones, the competitors are rival search providers, and the market is search advertising.

Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.

The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.

Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.

The problem of anticompetitive conduct is only slightly more difficult.

Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.

(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer harm requirement decides whether the requisite harm to competitors is also harm to competition.)

It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.

Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.

The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.

That leaves consumer harm. And here is where things get iffy.

Usage as a Product Improvement: A Very Convenient Argument

The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.

But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?

If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.

So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?

Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.

And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.

None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.

This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.

The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.

It turns out that has happened before.

Economies of Scale as a Product Improvement: Once a Convenient Argument

Once upon a time, antitrust exempted another kind of business whose products improve the more people use them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.

The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.

Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.

But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.
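A minimal numerical sketch of that claim (the $10 million figure is the machinery investment from the textile-mill example above; the five-cent marginal cost per yard is an assumed illustration): with fixed cost F and constant marginal cost c, the average cost of a yard of cloth is

$$
AC(Q) = \frac{F}{Q} + c,
$$

so with F = $10,000,000 and c = $0.05 per yard, average cost is about $10.05 per yard at 1 million yards of output but only $0.15 per yard at 100 million yards. The more buyers a single mill serves, the closer its unit cost falls toward pennies.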

If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.

For antitrust in the late 19th and early 20th centuries, beguiled by this advantage of size, exclusive dealing, refusals to deal, even the knife in a competitor’s back were all for the better, whether or not they ran afoul of other areas of law, because they allowed industrial enterprises to achieve economies of scale.

It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.

The Monopolistic Competition Revolution

The revolution came in the form of the theory of monopolistic competition and its cousin, the theory of creative destruction, developed between the 1920s and 1940s by Edward Chamberlin, Joan Robinson and Joseph Schumpeter.

These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.

From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.

The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.

What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation— “creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.

This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.
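For readers unfamiliar with the term, a standard growth-accounting statement of the idea (the notation here is mine, not the author’s): the Solow residual is the portion of output growth left unexplained by growth in capital and labor inputs,

$$
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\,\frac{\dot{K}}{K} - (1-\alpha)\,\frac{\dot{L}}{L},
$$

where Y is output, K is capital, L is labor, and α is capital’s share of income. Solow’s finding that this residual, attributed to technological change, accounts for the bulk of measured growth is the result the paragraph above refers to.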

Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.

Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.

It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.

Usage-Based Improvements Are Not Like Economies of Scale

The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.

But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.

Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?

Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.

Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.

A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.

To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.

If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.

And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.

But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.

Americans Keep Voting to Centralize the Internet

In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.

For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.

But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.

The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.

But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.

Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.

But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.

Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.

The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.

And that Google should win on consumer harm.

Remember the Telephone

Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.

The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.

Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.

Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.

Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.

The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.

The search equivalent of interconnection, a requirement that the benefits of usage, in the form of data and attention, be shared among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of the decades of regulatory experience with the defendant’s operations that the district court could draw upon in the 1982 AT&T case.

The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.

Microsoft Not to the Contrary, Because Users Were in Common

Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.

As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.

That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.

The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.

The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.

This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.

It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.

The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.

That means that, unlike Google and rival search engines, Windows and Netscape shared users.

So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.

By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.

Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.

Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.

And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.

Germán Gutiérrez and Thomas Philippon have released a major rewrite of their paper comparing the U.S. and EU competitive environments. 

Although the NBER website provides an enticing title — “How European Markets Became Free: A Study of Institutional Drift” — the paper itself has a much more yawn-inducing title: “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift.”

Having already critiqued the original paper at length (here and here), I wouldn’t normally take much interest in the do-over. However, in a recent episode of Tyler Cowen’s podcast, Jason Furman gave a shout out to Philippon’s work on increasing concentration. So, I thought it might be worth a review.

As with the original, the paper begins with a conclusion: The EU appears to be more competitive than the U.S. The authors then concoct a theory to explain their conclusion. The theory’s a bit janky, but it goes something like this:

  • Because of lobbying pressure and regulatory capture, an individual country will enforce competition policy at a suboptimal level.
  • Because of competing interests among different countries, a “supra-national” body will be more independent and better able to foster pro-competitive policies and to engage in more vigorous enforcement of competition policy.
  • The EU’s supra-national body and its Directorate-General for Competition are more independent than the U.S. Department of Justice and Federal Trade Commission.
  • Therefore, their model explains why the EU is more competitive than the U.S. Q.E.D.

If you’re looking for what this has to do with “institutional drift,” don’t bother. The term only shows up in the title.

The original paper provided evidence from 12 separate “markets” that the authors say demonstrated their conclusion about EU vs. U.S. competitiveness. These weren’t really “markets” in the competition-policy sense; they were just broad industry categories, such as health, information, trade, and professional services (actually “other business sector services”).

As pointed out in one of my earlier critiques, in all but one of these industries the 8-firm concentration ratios for the U.S. and the EU are below 40 percent, and the HHI measures reported in the original paper are at levels that most observers would presume to be competitive.

Sending their original markets to drift in the appendices, Gutiérrez and Philippon’s revised paper focuses on two markets — telecommunications and airlines — to support the claim that EU markets are more competitive than their U.S. counterparts. First, telecoms:

To be more concrete, consider the Telecom industry and the entry of the French Telecom company Free Mobile. Until 2011, the French mobile industry was an oligopoly with three large historical incumbents and weak competition. … Free obtained its 4G license in 2011 and entered the market with a plan of unlimited talk, messaging and data for €20. Within six months, the incumbents Orange, SFR and Bouygues had reacted by launching their own discount brands and by offering €20 contracts as well. … The relative price decline was 40%: France went from being 15% more expensive than the US [in 2011] to being 25% cheaper in about two years [in 2013].

While this is an interesting story about how entry can increase competition, the story of a single firm entering a market in a single country is hardly evidence that the EU as a whole is more competitive than the U.S.

What Gutiérrez and Philippon don’t report is that from 2013 to 2019, prices declined by 12% in the U.S. and only 8% in France. In the EU as a whole, prices decreased by only 5% over the years 2013-2019.

Gutiérrez and Philippon’s passenger airline story is even weaker. Because airline prices don’t fit their narrative, they argue that increasing airline profits are evidence that the U.S. is less competitive than the EU. 

In Figure 5 of their paper (“Air Transportation Profits and Concentration, EU vs US”), they claim that the “rise in US concentration and profits aligns closely with a controversial merger wave,” with a vertical line in the figure marking the Delta-Northwest merger.

Sure, profitability among U.S. firms increased. But, before the “merger wave,” profits were negative. Perhaps predatory pricing is pro-competitive after all.

Where Gutiérrez and Philippon really fumble is with airline pricing. Since the merger wave that pulled the U.S. airline industry out of insolvency, ticket prices (as measured by the Consumer Price Index) have decreased by 6%. In France, prices increased by 4%, and in the EU, prices increased by 30%.

The paper relies more heavily on eyeballing graphs than on statistical analysis, but something about Table 2 caught my attention — the R-squared statistics. First, they’re all over the place. But look at column (1): a perfect 1.00 R-squared. Could it be that Gutiérrez and Philippon’s statistical model has (almost) as many parameters as observations?

Notice that all the regressions with an R-squared of 0.9 or higher include country fixed effects. The two regressions with R-squareds of 0.95 and 0.96 also include country-industry fixed effects. It’s very possible that the regression results are driven entirely by idiosyncratic differences among countries and industries.

Gutiérrez and Philippon provide no interpretation for their results in Table 2, but it seems to work like this, using column (1): A 10% increase in the 4-firm concentration ratio (which is different from a 10-percentage-point increase) would be associated with a 1.8% increase in prices four years later. So, an increase in CR4 from 20% to 22% (or an increase from 60% to 66%) would be associated with a 1.8% increase in prices over four years, or about 0.4% a year. On the one hand, I just don’t buy it. On the other hand, the effect is so small that it seems economically insignificant.
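For readers who want to see the arithmetic, here is a minimal sketch of that reading of column (1), treating the estimate as an elasticity of roughly 0.18. The concentration levels are the hypothetical ones used above, not data from the paper.

```python
# Back-of-the-envelope reading of a log-log concentration coefficient.
# The 0.18 elasticity and the CR4 levels below are illustrative assumptions.

def implied_price_change(cr4_before: float, cr4_after: float, elasticity: float = 0.18) -> float:
    """Percent price change implied by a percent change in the 4-firm concentration ratio."""
    pct_change_cr4 = (cr4_after - cr4_before) / cr4_before * 100  # percent, not percentage points
    return elasticity * pct_change_cr4

for before, after in [(0.20, 0.22), (0.60, 0.66)]:
    total = implied_price_change(before, after)
    print(f"CR4 {before:.0%} -> {after:.0%}: ~{total:.1f}% higher prices after four years "
          f"(~{total / 4:.2f}% per year)")
```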

I’m sure Gutiérrez and Philippon have put a lot of time into this paper and its revision. But there’s an old saying that the best thing about banging your head against the wall is that it feels so good when it stops. Perhaps, it’s time to stop with this paper and let it “drift” into obscurity.

Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: from “direct consumer” systems that the government runs itself, to “open access” arrangements in which the government builds the back-end but leaves it to private firms to bring connections to consumers, to “middle mile” networks in which the government network reaches only some parts of the community and private firms connect to serve the rest. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move. 

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs. 

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers is dependent on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even choices between the importance of upload speeds versus download speeds may be highly asymmetrical if determined by consumers.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may be able to have losses over some period of time, they ultimately must turn a profit at some point, or there will be exit from the marketplace. Profit-and-loss both serve important functions.

Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes. 

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and moving to the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

As Thomas Sowell has noted many times, political debates often involve the use of words that, if taken literally, mean something very different from the connotations they convey. Examples abound in the debate about broadband buildout.

There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain. 

“Access” and “high-speed broadband”

For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” is much different from what the average American would expect.

A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice which presents itself.

But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem which calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure to rural areas.

“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.

However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The disconnect between this arbitrary number and consumer demand as measured in the marketplace, in light of real trade-offs between cost and performance, is not well explained in the study. The assumption is simply that faster is better, and that the building of faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.

But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps. 
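To make the upload-speed point concrete, here is a small, illustrative sketch comparing assumed upload requirements for common tasks against the FCC’s 3 Mbps baseline and the proposed 100 Mbps symmetrical target. The per-task bitrates are ballpark assumptions of mine, not figures from Sallet’s report or the FCC; actual requirements vary by codec, resolution, and platform.

```python
# Rough headroom check for common upload-heavy tasks against two plan levels.
# The bitrates below are ballpark assumptions for illustration only.

ASSUMED_UPLOAD_NEEDS_MBPS = {
    "HD video call": 3.0,
    "1080p live stream": 6.0,
    "4K live stream": 25.0,
}

def headroom(plan_upload_mbps: float) -> None:
    """Print whether an assumed plan covers each task's assumed upload need."""
    print(f"Plan upload: {plan_upload_mbps} Mbps")
    for task, need in ASSUMED_UPLOAD_NEEDS_MBPS.items():
        verdict = "ok" if plan_upload_mbps >= need else "short"
        print(f"  {task:<18} needs ~{need:>5.1f} Mbps -> {verdict}")

headroom(3.0)    # FCC baseline upload
headroom(100.0)  # proposed symmetrical target
```

Even on these generous assumptions, the most demanding household task sits far below 100 Mbps of upload capacity.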

“Competition” and “Overbuilding”

Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.

The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at lowest cost. Specific markets are often subject to competition not only from the firms which exist within those markets, but also from potential competitors who may enter the market any time potential profits reach a point high enough to justify the costs of entry. An important inference from this is that temporary monopolies, in the sense that one firm has a significant share of the market, are not in themselves illegal under antitrust law, even if they are charging monopoly prices. Potential entry is as real in its effects as actual competitors in forcing incumbents to continue to innovate and provide value to consumers.

However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks. 

In fact, Sallet argues that fears of “overbuilding” are really just fears of competition by incumbent broadband ISPs:

Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more. 

Sallet makes two rhetorical moves here to make his argument. 

The first is redefining “overbuilding,” which refers to literally building a new network on top of (that is, “over”) previously built architecture, as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.

The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.  

But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, there are sometimes high fixed costs which limit the number of firms which will likely exist in a competitive market. In some markets, known as natural monopolies, high infrastructural costs and other barriers to entry relative to the size of the market lead to a situation where it is cheaper for a monopoly to provide a good or service than multiple firms in a market. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is a significant risk in government subsidizing entry. 

Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.” 

As noted by economist George Ford in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry which often causes incumbents to act competitively even in the absence of competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to that which could harm it. If the market only profitably sustains one or two competitors, adding another through municipal broadband or subsidizing a new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive because the subsidized entrant is shielded from the market test of profit-and-loss.

The “Donut Hole” Problem

The term “donut hole” is a final example to consider of how words can be used to confuse rather than enlighten in this debate.

There is broad agreement that to generate the positive externalities from universal service, there needs to be subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches. 

For instance, some critics of the current subsidy approach have identified a phenomenon where the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but there is relatively paltry Internet service in between due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while “underserved” outlying parts immediately surrounding town centers receive nothing under current policy.

Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served. 

Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole of funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally only provide municipal service — they only provide lower-cost service. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (because they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base. 

This version of the “donut hole,” in which municipal entry erodes the city-center revenues that private firms rely on to support the costs of providing service to outlying areas, has two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. And, second, it increases the average cost of providing service across the private firm’s network (because the firm is no longer recovering as much of its costs from the lower-cost city core), which increases the prices that need to be charged to rural users in order to justify offering service at all.
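A stylized sketch may help illustrate that second effect. All of the numbers below (customer counts, per-customer costs, and prices) are hypothetical, chosen only to show how losing city-core margin raises the rural price a private carrier would need to break even.

```python
# Stylized illustration of the second "donut hole" effect: when a municipal
# network takes city-core customers from a private carrier, the carrier has
# less core margin left to cross-subsidize rural service, so the break-even
# rural price rises. All numbers are hypothetical.

CORE_CUSTOMERS = 10_000
RURAL_CUSTOMERS = 2_000
CORE_PRICE, CORE_COST = 60.0, 40.0    # monthly, per city-core customer
RURAL_COST = 90.0                     # monthly, per rural customer

def breakeven_rural_price(core_share_lost: float) -> float:
    """Minimum rural price at which the carrier breaks even overall."""
    remaining_core = CORE_CUSTOMERS * (1 - core_share_lost)
    core_margin = remaining_core * (CORE_PRICE - CORE_COST)
    return max(RURAL_COST - core_margin / RURAL_CUSTOMERS, 0.0)

for lost in (0.0, 0.3, 0.6):
    print(f"core customers lost: {lost:.0%} -> break-even rural price: "
          f"${breakeven_rural_price(lost):.2f}")
```

The sketch omits the other half of the double whammy, the downward pressure that subsidized competition puts on city-core prices, which would push the break-even rural price up further still.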

Conclusion

Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.

Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second in the form of complaints about ISPs re-imposing UBB following an end to the voluntary temporary halting of the practice during the first months of the COVID-19 pandemic — a move that was an expansion by ISPs of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed.  Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that differentiate prices according to customers’ usage of the service. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its  subscribers use less than 1.2 TB of data monthly).
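As a rough illustration of that cross-subsidy logic, consider the toy calculation below. The subscriber split, total network cost, and heavy users’ share of traffic are hypothetical assumptions, not ISP data; the point is only to show who covers the costs under each pricing rule.

```python
# Toy comparison of flat-rate vs. usage-based cost recovery for a skewed
# user base. All figures are hypothetical assumptions for illustration.

LIGHT_USERS, HEAVY_USERS = 95, 5          # per 100 subscribers
NETWORK_COST = 100 * 50.0                 # total monthly cost to recover
HEAVY_SHARE_OF_USAGE = 0.30               # heavy users' assumed share of traffic

# Flat rate: everyone pays the same regardless of usage.
flat_price = NETWORK_COST / (LIGHT_USERS + HEAVY_USERS)

# Usage-based: recover costs in proportion to usage.
heavy_price = NETWORK_COST * HEAVY_SHARE_OF_USAGE / HEAVY_USERS
light_price = NETWORK_COST * (1 - HEAVY_SHARE_OF_USAGE) / LIGHT_USERS

print(f"flat rate:          ${flat_price:.2f} for everyone")
print(f"usage-based, light: ${light_price:.2f}")
print(f"usage-based, heavy: ${heavy_price:.2f}")
```

Under the flat rate, the many light users pay more than their usage-proportional share so that the few heavy users can pay less; usage-based billing reverses that.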

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the sort of baseline policy misapprehension that it is needed to manage the network, but that fallacy is dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” Thus, this is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
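For concreteness, a two-part tariff of the kind Coase described can be sketched in a few lines; the base fee and per-gigabyte rate below are made-up numbers for illustration only.

```python
# Minimal two-part tariff: a fixed fee covering shared infrastructure plus a
# per-unit charge tied to usage. The fee and per-GB rate are hypothetical.

def two_part_tariff(usage_gb: float, base_fee: float = 30.0, per_gb: float = 0.05) -> float:
    """Monthly bill under a two-part tariff: fixed fee plus usage charge."""
    return base_fee + per_gb * usage_gb

for usage in (100, 500, 1_500):
    print(f"{usage:>5} GB/month -> ${two_part_tariff(usage):.2f}")
```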

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place —  from subscribing by saddling them with higher prices than they would face with capacity pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (utilizing 1 TB/month or more) and “extreme power users” (2 TB/month or more) jumped 138 percent and 215 percent, respectively. Power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.
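A quick back-of-the-envelope check, assuming those percentage jumps apply to subscriber shares (a simplification, since OpenVault reports user counts rather than shares), suggests how small these tiers were before the surge.

```python
# Back-of-the-envelope check on the OpenVault figures: if each tier's share of
# subscribers grew by the stated percentages, what were the pre-surge shares?
# Assumes the jumps apply to subscriber shares, which is a simplification.

current_share = {"power users (>=1TB)": 0.10, "extreme power users (>=2TB)": 0.012}
reported_jump = {"power users (>=1TB)": 1.38, "extreme power users (>=2TB)": 2.15}

for tier, share in current_share.items():
    implied_before = share / (1 + reported_jump[tier])
    print(f"{tier}: {share:.1%} now, implying roughly {implied_before:.1%} pre-surge")
```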

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide their customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better-off the majority of US broadband customers will be.

Every 5 years, Congress has to reauthorize the sunsetting provisions of the Satellite Television Extension and Localism Act (STELA). And the deadline for renewing the law is quickly approaching (Dec. 31). While sunsetting is, in the abstract, seemingly a good thing to ensure rules don’t become outdated, there is an interlocking set of interest groups that, generally speaking, only support reauthorizing the law because they are locked in a regulatory stalemate. STELA no longer represents an optimal outcome for many, if not most, of the affected parties. The time has come to finally allow STELA to sunset, and to use the occasion to reform the underlying regulatory morass it is built upon.

Since the creation of STELA’s earliest predecessor in 1988, much has changed in the marketplace. At the time of the 1992 Cable Act (the first year for which data from the FCC’s Video Competition Reports are available), cable providers served 95% of multichannel video subscribers. Now, the power of cable has waned to the extent that 2 of the top 4 multichannel video programming distributors (MVPDs) are satellite providers, without even considering the explosion in competition from online video distributors like Netflix and Amazon Prime.

Given these developments, Congress should reconsider whether STELA is necessary at all, along with the whole complex regulatory structure undergirding it, and consider the relative simplicity with which copyright and antitrust law are capable of adequately facilitating the market for broadcast content negotiations. An approach building upon that contemplated in the bipartisan Modern Television Act of 2019 by Congressman Steve Scalise (R-LA) and Congresswoman Anna Eshoo (D-CA)—which would repeal the compulsory license/retransmission consent regime for both cable and satellite—would be a step in the right direction.

A brief history of STELA

STELA, which began as the 1988 Satellite Home Viewer Act, was originally justified as necessary to promote satellite competition against incumbent cable networks and to give satellite companies stronger negotiating positions against network broadcasters. In particular, the goal was to give satellite providers the ability to transmit terrestrial network broadcasts to subscribers. To do this, the regulatory structure modified both the Communications Act and the Copyright Act.

With the 1988 Satellite Home Viewer Act, Congress created a compulsory license for satellite retransmissions under Section 119 of the Copyright Act. This compulsory license provision mandated, just as the Cable Act did for cable providers, that satellite providers would have the right to certain network broadcast content in exchange for a government-set price (despite the fact that local network affiliates don’t necessarily own the copyrights themselves). The retransmission consent provision requires satellite providers (and cable providers under the Cable Act) to negotiate with network broadcasters over the fee to be paid for the right to network broadcast content.

Alternatively, broadcasters can opt to impose must-carry provisions on cable and satellite  in lieu of retransmission consent negotiations. These provisions require satellite and cable operators to carry many channels from network broadcasters in order to have access to their content. As ICLE President Geoffrey Manne explained to Congress previously:

The must-carry rules require that, for cable providers offering 12 or more channels in their basic tier, at least one-third of these be local broadcast retransmissions. The forced carriage of additional, less-favored local channels results in a “tax on capacity,” and at the margins causes a reduction in quality… In the end, must-carry rules effectively transfer significant programming decisions from cable providers to broadcast stations, to the detriment of consumers… Although the ability of local broadcasters to opt in to retransmission consent in lieu of must-carry permits negotiation between local broadcasters and cable providers over the price of retransmission, must-carry sets a floor on this price, ensuring that payment never flows from broadcasters to cable providers for carriage, even though for some content this is surely the efficient transaction.

The essential question about the reauthorization of STELA regards the following provisions: 

  1. an exemption from retransmission consent requirements for satellite operators for the carriage of distant network signals to “unserved households” while maintaining the compulsory license right for those signals (modification of the compulsory license/retransmission consent regime);
  2. the prohibition on exclusive retransmission consent contracts between MVPDs and network broadcasters (per se ban on a business model); and
  3. the requirement that television broadcast stations and MVPDs negotiate in good faith (nebulous negotiating standard reviewed by FCC).

This regulatory scheme was supposed to sunset after five years. Instead of actually sunsetting, Congress has consistently reauthorized STELA (in 1994, 1999, 2004, 2010, and 2014).

Each time, satellite companies like DirecTV and Dish Network, as well as interest groups representing rural customers who depend heavily on satellite for television service, strongly supported renewal of the legislation. Over time, though, reauthorization has come to include amendments backed by major players on each side of the negotiating table, along with broad support for what is widely considered "must-pass" legislation. In other words, every affected industry found something it liked in the compromise legislation.

As it stands, STELA's sunset provision gives each side negotiating leverage during each round of reauthorization talks, and concessions are often extracted. But rather than simplifying this regulatory morass, STELA reauthorization simply extends rules that have outlived their purpose.

Current marketplace competition undermines the necessity of STELA reauthorization

The marketplace is very different in 2019 than it was when STELA’s predecessors were adopted and reauthorized. No longer is it the case that cable dominates and that satellite and other providers need a leg up just to compete. Moreover, there are now services that didn’t even exist when the STELA framework was first developed. Competition is thriving.

The following table, compiled from Wikipedia, lists the largest providers by subscribers:

| Rank | Service | Subscribers | Provider | Type |
|------|---------|-------------|----------|------|
| 1 | Xfinity | 21,986,000 | Comcast | Cable |
| 2 | DirecTV | 19,222,000 | AT&T | Satellite |
| 3 | Spectrum | 16,606,000 | Charter | Cable |
| 4 | Dish | 9,905,000 | Dish Network | Satellite |
| 5 | Verizon Fios TV | 4,451,000 | Verizon | Fiber-Optic |
| 6 | Cox Cable TV | 4,015,000 | Cox Enterprises | Cable |
| 7 | U-Verse TV | 3,704,000 | AT&T | Fiber-Optic |
| 8 | Optimum/Suddenlink | 3,307,500 | Altice USA | Cable |
| 9 | Sling TV* | 2,417,000 | Dish Network | Live Streaming |
| 10 | Hulu with Live TV | 2,000,000 | Hulu (Disney, Comcast, AT&T) | Live Streaming |
| 11 | DirecTV Now | 1,591,000 | AT&T | Live Streaming |
| 12 | YouTube TV | 1,000,000 | Google (Alphabet) | Live Streaming |
| 13 | Frontier FiOS | 838,000 | Frontier | Fiber-Optic |
| 14 | Mediacom | 776,000 | Mediacom | Cable |
| 15 | PlayStation Vue | 500,000 | Sony | Live Streaming |
| 16 | CableOne Cable TV | 326,423 | Cable One | Cable |
| 17 | FuboTV | 250,000 | FuboTV | Live Streaming |

A 2018 accounting of the largest MVPDs by subscribers shows that satellite providers hold 2 of the top 4 spots, and that over-the-top services like Sling TV, Hulu with Live TV, and YouTube TV are gaining significantly. And this does not even consider (non-live) streaming services such as Netflix (approximately 60 million US subscribers), Hulu (about 28 million US subscribers), and Amazon Prime Video (about 40 million US users). It is not clear from these numbers that satellite needs special rules in order to compete with cable, or that the complex regulatory regime underlying STELA is necessary anymore.

On the contrary, there is ample reason to believe that content is king and that the market for distributing that content is thriving. Competition among platforms is intense, not only among MVPDs like Comcast, DirecTV, Charter, and Dish Network, but also from streaming services like Netflix, Amazon Prime Video, Hulu, and HBO Now. Distribution networks invest heavily in exclusive content to attract consumers. There is no reason to think we need selective forbearance from the byzantine regulations in this space in order to promote satellite adoption when satellite companies are just as good as any at contracting for high-demand content (for instance, DirecTV with NFL Sunday Ticket).

A better way forward: Streamlined regulation in the form of copyright and antitrust

As Geoffrey Manne said in his Congressional testimony on STELA reauthorization back in 2013: 

behind all these special outdated regulations are laws of general application that govern the rest of the economy: antitrust and copyright. These are better, more resilient rules. They are simple rules for a complex world. They will stand up far better as video technology evolves–and they don’t need to be sunsetted.

Copyright law establishes clearly defined rights, thereby permitting efficient bargaining between content owners and distributors. But under the compulsory license system, copyright holders' right to license the performance of their works is fundamentally abridged. Retransmission consent normally requires that fees be paid for the content MVPDs make available to their subscribers. But STELA exempts certain network broadcasts ("distant signals" for "unserved households") from retransmission consent requirements. This reduces incentives to develop content subject to STELA, which at the margin harms both content creators and viewers. It also gives satellite an unfair advantage vis-à-vis cable in those cases where it does not need to pay ever-rising retransmission consent fees. Ironically, it also reduces the incentive for satellite providers (DirecTV, at least) to work to provide local content to some rural consumers. Congress should reform the law to restore copyright holders' full rights under the Copyright Act. It should also repeal the compulsory license and must-carry provisions, which work at cross-purposes, and allow true marketplace negotiations.

The initial allocation of property rights guaranteed under copyright law would allow MVPDs, including satellite providers, to negotiate with copyright holders for content, and thereby realize a more efficient set of content-distribution outcomes than is otherwise possible. Under the compulsory license/retransmission consent regime underlying both STELA and the Cable Act, the outcomes at best approximate those that would occur through pure private ordering; in most cases, they are economically inefficient because of the regulatory thumb on the scale in favor of broadcasters.

Just as copyright law provides a superior set of bargaining conditions for content negotiation, antitrust law provides a superior mechanism for policing potentially problematic conduct among the firms involved. Under STELA, the FCC polices transactions under a "good faith" standard. In an important sense, this ambiguous grant of regulatory discretion provides little information to prospective buyers and sellers of licenses as to what counts as "good faith" negotiation (aside from the specific practices listed).

By contrast, antitrust law, guided by the consumer welfare standard and decades of case law, is designed both to deter potential anticompetitive foreclosure and to provide a clear standard for firms operating in the marketplace. The effect of relying on antitrust law to police competitive harms is, as the name of the standard suggests, a net increase in the welfare of consumers, the ultimate beneficiaries of a well-functioning market.

For instance, consider a hypothetical dispute between a network broadcaster and a satellite provider. Under the FCC's "good faith" oversight, bargaining disputes, which increasingly result in blackouts, are reviewed against a list of negotiating practices deemed unfair, 47 CFR § 76.65(b)(1), and a more general "totality of the circumstances" standard, 47 CFR § 76.65(b)(2). This approach is both over- and under-inclusive: the practices listed in (b)(1) may have procompetitive benefits in certain circumstances, and the (b)(2) totality-of-the-circumstances standard is vague and ill-defined. By comparison, antitrust claims would be adjudicated through a foreseeable process with reference to a consumer welfare standard illuminated by economic evidence and case law.

If a satellite provider alleges anticompetitive foreclosure through a refusal to license, its claims would be subject to analysis under the Sherman Act. To prove its case, it would need to show that the network broadcaster has power in a properly defined market and is using that power to foreclose competition by leveraging its ownership of network content to the detriment of consumer welfare. A court would then analyze whether this refusal to deal violates antitrust law under the Trinko and Aspen Skiing standards. Economic evidence would need to be introduced to support the allegation.

And, critically, in this process the defendants would be entitled to present evidence in their defense: both evidence suggesting that there was no foreclosure and evidence of procompetitive justifications for decisions that might otherwise be considered foreclosure. Ultimately, a court, bound by established, nondiscretionary standards, would weigh the evidence and make a determination. It is, of course, possible that a review for "good faith" conduct could reach the correct result, but there is simply no similarly rigorous process available to consistently push it in that direction.

The above-mentioned Modern Television Act of 2019 does represent a step in the right direction, as it would repeal the compulsory license/retransmission consent regime applied to both cable and satellite operators. It is imperfect, however, in that it leaves must-carry requirements in place for local content and retains the "good faith" negotiating standard enforced by the FCC.

Expiration is better than the status quo even if fundamental reform is not possible

Some scholars who have written on this issue, and who very much agree that fundamental reform is needed, nonetheless argue that STELA should be renewed if more fundamental reforms like those described above can't be achieved. For instance, George Ford recently wrote:

With limited days left in the legislative calendar before STELAR expires, there is insufficient time for a sensible solution to this complex issue. Senate Commerce Committee Chairman Roger Wicker (R-Miss.) has offered a “clean” STELAR reauthorization bill to maintain the status quo, which would provide Congress with some much-needed breathing room to begin tackling the gnarly issue of how broadcast signals can be both widely retransmitted and compensated. Congress and the Trump administration should welcome this opportunity.

However, even in a world without more fundamental reform, it is not clear that satellite needs distant signals in order to compete with cable. The number of "short markets" (i.e., those without access to all four local network broadcasts) implicated by the loss of distant signals is relatively small. However badly the overall regulatory scheme needs to be updated, it makes no sense to continue preserving STELA provisions that benefit satellite when they are no longer necessary on competition grounds.

Conclusion

Congress should not only let STELA sunset; it should also consider reforming the entire compulsory license/retransmission consent regime, as the Modern Television Act of 2019 aims to do. In fact, reformers should go even further and repeal the must-carry provisions and the "good faith" negotiating standard enforced by the FCC. Copyright and antitrust law are far better suited to this constantly evolving space than the current sector-specific rules.

For previous work from ICLE on STELA, see The Future of Video Marketplace Regulation (written testimony of ICLE President Geoffrey Manne from June 12, 2013) and the Joint Comments of ICLE and TechFreedom, In the Matter of STELA Reauthorization and Video Programming Reform (March 19, 2014).