In a May 3 op-ed in The New York Times, Federal Trade Commission (FTC) Chair Lina Khan declares that “We Must Regulate A.I. Here’s How.” I’m concerned after reading it that I missed both the regulatory issue and the “here’s how” part, although she does tell us that “enforcers and regulators must be vigilant.”
Indeed, enforcers should be vigilant in exercising their established authority, pace not-a-little controversy about the scope of the FTC’s authority.
Most of the chair’s column reads like a parade of horribles. And there’s nothing wrong with identifying risks, even if not every worry represents a serious risk. As Descartes said—or, at least, sort of implied—feelings are never wrong, qua feelings. If one has a thought, it’s hard to deny that one is having it.
To be clear, I can think of non-fanciful instantiations of the floats in Khan’s parade. Artificial intelligence (AI) could be used to commit fraud, which is and ought to be unlawful. Enforcers should be on the lookout for new forms of fraud, as well as new instances of it. Antitrust violations, likewise, may occur in the tech sector, just as they’ve been found in the hospital sector, electrical manufacturing, and air travel.
Tech innovations entail costs as well as benefits, and we ought to be alert to both. But there’s a real point to parsing those harms from benefits—and the actual from the likely from the possible—if one seeks to identify and balance the tradeoffs entailed by conduct that may or may not cause harm on net.
Doing so can be complicated. AI is not just ChatGPT; it’s not just systems that employ foundational large language models; and it’s not just systems that employ one or another form of machine learning. It’s not all (or chiefly) about fraud. The regulatory problem is not just what to do about AI, but what to do about…what?
That is, what forms, applications, or consequences do we mean to address, and how and why? If some AI application costs me my job, is that a violation of the FTC Act? Some other law? Abstracting from my own preferences and inflated sense of self-importance, is it a federal issue?
If one is to enforce the law or engage in regulation, there’s a real need to be specific about one’s subject matter, as well as what one plans to do about it, lest one throw out babies with bathwater. Which reminds me of parts of a famous (for certain people of a certain age) essay in 1970s computer science: Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity,” which is partly about oversimplification in characterizing AI.
The cynic in me has three basic worries about Khan’s FTC, if not about AI generally:
1. Vigilance is not so much a method as a state of mind (or part of a slogan, or a motto, sometimes put in Latin). It’s about being watchful.
2. The commission’s current instantiation won’t stop at vigilance, and it won’t stick to established principles of antitrust and consumer-protection law, or to its established jurisdiction.
3. Doing so without being clear on what counts as an actionable harm under Section 5 of the FTC Act risks considerable damage to innovation, and to the consumer benefits produced by such innovation.
Perhaps I’m not being all that cynical, given the commission’s expansive new statement of enforcement principles regarding unfair methods of competition (UMC), not to mention the raft of new FTC regulatory proposals. For example, Khan’s op-ed includes a link to the FTC’s proposed commercial surveillance and data security rulemaking, as Khan notes (without specifics) that “innovative services … came at a steep cost. What were initially conceived of as free services were monetized through extensive surveillance of people and businesses that used them.”
That reads like targeted advertising (as opposed to blanket advertising) engaged in cosplay as the Stasi:
I’ll never talk.
Oh, yes, you’ll talk. You’ll talk or else we’ll charge you for some of your favorite media.
Ok, so maybe I’ll talk a little.
Here again, it’s not that one couldn’t object to certain acquisitions or applications of consumer data (on some or another definition of “consumer data”). It’s that the concerns purported to motivate regulation read like a laundry list of myriad potential harms with barely a nod to the possibility—much less the fact—of benefits. Surveillance, we’re told in the FTC’s notice of proposed rulemaking, involves:
…the collection, aggregation, retention, analysis, transfer, or monetization of consumer data and the direct derivatives of that information. These data include both information that consumers actively provide—say, when they affirmatively register for a service or make a purchase—as well as personal identifiers and other information that companies collect, for example, when a consumer casually browses the web or opens an app.
That seems to encompass, roughly, anything one might do with data somehow connected to a consumer. For example, there’s the storage of information I voluntarily provide when registering for an airline’s rewards program, because I want the rewards miles. And there’s the information my physician collects, stores, and analyzes in treating me and maintaining medical records, including—but not limited to—things I tell the doctor because I want informed medical treatment.
Anyone might be concerned that personal medical information could be misused. It turns out that there are laws against various forms of misuse, though those laws are imperfect. But are all such practices really “surveillance”? Don’t many have some utility? Incidentally, don’t many consumers—as studies indicate—prefer arrangements whereby they can obtain “content” without a monetary payment? Should all such practices be regulated by the FTC without a new congressional charge, or addressed under a general prohibition of either UMC or “unfair and deceptive acts or practices” (UDAP)? The commission is, incidentally, considering either or both as grounds.
By statute, the FTC’s “unfairness” authority extends only to conduct that “causes or is likely to cause substantial injury to consumers which is not reasonably avoided by consumers themselves.” And it does not cover conduct where those costs are “outweighed by countervailing benefits to consumers or competition.” So which ones are those?
Chair Khan tells us that we have “an online economy where access to increasingly essential services is conditioned on widespread hoarding and sale of our personal data.” “Essential” seems important, if unspecific. And “hoarding” seems bad, if undistinguished from legitimate collection and storage. It sounds as if Google’s servers are like a giant ball of aluminum foil distributed across many cluttered, if virtual, apartments.
Khan breezily assures readers that the:
…FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly evolving A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.
But I wonder whether concerns about AI—both the well-founded and the fanciful—all fit under these rubrics. And there’s really no explanation of how the agency means to parse, say, unlawful mergers (under the Sherman and/or Clayton acts) from lawful ones, whether or not they have to do with AI.
We’re told that a “handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools.” Perhaps, but why link to a newspaper article about Google and Microsoft for “powerful businesses” without establishing any relevant violations of the law? And why link to an article about Google and Nvidia AI systems—which are not raw materials—in suggesting that some firms control “essential” raw materials (as inputs) to innovation, without any further explanation? Was there an antitrust violation?
Maybe we already regulate AI in various ways. And maybe we should consider some new ones. But I’m stuck at the headline of Khan’s piece: Must we regulate further? If so, how? And not incidentally, why, and at what cost?
[Closing out Week Two of our FTC UMC Rulemaking symposium is a contribution from a very special guest: Commissioner Noah J. Phillips of the Federal Trade Commission. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
In his July 2021 Executive Order, President Joe Biden called on the Federal Trade Commission (FTC) to consider making a series of rules under its purported authority to regulate “unfair methods of competition.” Chair Lina Khan has previously voiced her support for doing so. My view is that the commission has no such rulemaking powers, and that the scope of the authority asserted would amount to an unconstitutional delegation of power by Congress. Others have written about those issues, and we can leave them for another day. Professors Richard Pierce and Gus Hurwitz have each written that, if FTC rulemaking is to survive judicial scrutiny, it must apply to conduct that is covered by the antitrust laws.
That idea raises an inherent tension between the concept of rulemaking and the underlying law. Proponents of rulemaking advocate “clear” rules to, in their view, reduce ambiguity, ensure predictability, promote administrability, and conserve resources otherwise spent on ex post, case-by-case adjudication. To the extent they mean the administrative adoption of per se illegality standards by rulemaking, that approach flies in the face of contemporary antitrust jurisprudence, which has been moving from per se standards back to the historical “rule of reason.”
Recognizing that the Sherman Act could be read to bar all contracts, federal courts for over a century have interpreted the 1890 antitrust law only to apply to “unreasonable” restraints of trade. The Supreme Court first adopted this concept in its landmark 1911 decision in Standard Oil, upholding the lower court’s dissolution of John D. Rockefeller’s Standard Oil Company. Just four years after the Federal Trade Commission Act was enacted, the Supreme Court established the “prevailing standard of analysis” for determining whether an agreement constitutes an unreasonable restraint of trade under Section 1 of the Sherman Act. Justice Louis Brandeis, who as an adviser to President Woodrow Wilson was instrumental in creating the FTC, described the scope of this “rule of reason” inquiry in the Chicago Board of Trade case:
The true test of legality is whether the restraint imposed is such as merely regulates and perhaps thereby promotes competition or whether it is such as may suppress or even destroy competition. To determine that question the court must ordinarily consider the facts peculiar to the business to which the restraint is applied; its condition before and after the restraint was imposed; the nature of the restraint and its effect, actual or probable. The history of the restraint, the evil believed to exist, the reason for adopting the particular remedy, the purpose or end sought to be attained, are all relevant facts.
The rule of reason was and remains today a fact-specific inquiry, but the Court also determined from early on that certain restraints invited a different analytical approach: per se prohibitions. The per se rule involves no weighing of the restraint’s procompetitive effects. Once proven, a restraint subject to the per se rule is presumed to be unreasonable and illegal. In the 1911 Dr. Miles case, the Court held that minimum resale price fixing was illegal per se under Section 1. It found horizontal price-fixing agreements to be per se illegal in Socony-Vacuum. Since Socony-Vacuum, the Court has limited the application of per se illegality to bid rigging (a form of horizontal price fixing), horizontal market divisions, tying, and group boycotts.
Starting in the 1970s, especially following research demonstrating the benefits to consumers of a number of business arrangements and contracts previously condemned by courts as per se illegal, the Court began to limit the categories of conduct that received per se treatment. In 1977, in GTE Sylvania, the Court held that vertical customer and territorial restraints should be judged under the rule of reason. In 1979, in BMI, it held that a blanket license issued by a clearinghouse of copyright owners that set a uniform price and prevented individual negotiation with licensees was a necessary precondition for the product and was thus subject to the rule of reason. In 1984, in Jefferson Parish, the Court rejected automatic application of the per se rule to tying. A year later, the Court held that the per se rule did not apply to all group boycotts. In 1997, in State Oil Company v. Khan, it held that maximum resale price fixing is not per se illegal. And, in 2007, the Court held that minimum resale price fixing should also be assessed under the rule of reason. In Leegin, the Court made clear that the per se rule is not the norm for analyzing the reasonableness of restraints; rather, the rule of reason is the “accepted standard for testing” whether a practice is unreasonable.
More recent Court decisions reflect the Court’s refusal to expand the scope of “quick look” analysis, an application of the rule of reason that nonetheless truncates the necessary fact-finding for liability where “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” In 2013, the Supreme Court rejected an FTC request to require courts to apply the “quick look” approach to reverse-payment settlement agreements. The Court has also backed away from presumptive rules of legality. In American Needle, the Court stripped the National Football League of Section 1 immunity by holding that the NFL is not entitled to the single-entity defense under Copperweld and, instead, its conduct must be analyzed under the “flexible” rule of reason. And last year, in NCAA v. Alston, the Court rejected the National Collegiate Athletic Association’s argument that it should have benefited from a “quick look,” restating that “most restraints challenged under the Sherman Act” are subject to the rule of reason.
The message from the Court is clear: rules are the exception, not the norm. It “presumptively applies rule of reason analysis” and applies the per se rule only to restraints that “lack any redeeming virtue.” Per se rules are reserved for “conduct that is manifestly anticompetitive” and that “would always or almost always tend to restrict competition and decrease output.” And that’s a short list. What is more, the Leegin Court made clear that administrative convenience—part of the justification for administrative rules—cannot in and of itself be sufficient to justify application of the per se rule.
The Court’s warnings about per se rules ring just as true for rules that could be promulgated under the Commission’s purported UMC rulemaking authority, which would function just as a per se rule would. Proof of the conduct ends the inquiry. No need to demonstrate anticompetitive effects. No procompetitive justifications. No efficiencies. No balancing.
But if the Commission attempts administratively to adopt per se rules, it will run up against precedents making clear that the antitrust laws do not abide such rules. This is not simply a matter of the—already controversial—historical attempts by the agency to reach, under Section 5, conduct that goes beyond the Sherman Act. Rather, establishing per se rules about conduct covered under the rule of reason effectively overrules Supreme Court precedent. For example, the Executive Order contemplates the FTC promulgating a rule concerning pay-for-delay settlements. But, to the extent it can fashion rules, the agency can only prohibit by rule that which is illegal. To adopt a per se ban on conduct covered by the rule of reason is to take out of the analysis the justifications for and benefits of the conduct in question. And while the FTC Act grants the agency some authority to prohibit conduct outside the scope of the Sherman Act, it does not do away with consideration of justifications or benefits when determining whether a practice is an “unfair method of competition.” As a result, the FTC cannot condemn categorically via rulemaking conduct that the courts have refused to condemn as per se illegal and instead have analyzed under the rule of reason. Last year, the FTC docketed a petition filed by the Open Markets Institute and others to ban “exclusionary contracts” by monopolists and other “dominant firms” under the agency’s unfair methods of competition authority. The precise scope is not entirely clear from the filing, but courts have held consistently that some conduct clearly covered (e.g., exclusive dealing) is properly evaluated under the rule of reason.
The Supreme Court has been loath to bless per se rules by courts. Rules are blunt instruments, not appropriately applied to conduct whose effect is not so clearly negative. Except for the “obvious,” an analysis of whether a restraint is unreasonable is not a “simple matter” and “easy labels do not always supply ready answers.” Over the decades, the Court has rebuked lower courts attempting to apply rules to conduct properly evaluated under the rule of reason. Should the Commission attempt the same administratively, or if it attempts administratively to rewrite judicial precedents, it would be rewriting the antitrust law itself and tempting a similar fate.
See e.g., Bd. of Trade v. United States, 246 U.S. 231, 238 (1918) (explaining that “the legality of an agreement . . . cannot be determined by so simple a test, as whether it restrains competition. Every agreement concerning trade … restrains. To bind, to restrain, is of their very essence”); Nat’l Soc’y of Prof’l Eng’rs v. United States, 435 U.S. 679, 687-88 (1978) (“restraint is the very essence of every contract; read literally, § 1 would outlaw the entire body of private contract law”).
 Standard Oil Co., v. United States, 221 U.S. 1 (1911).
See Continental T.V. v. GTE Sylvania, 433 U.S. 36, 49 (1977) (“Since the early years of this century a judicial gloss on this statutory language has established the ‘rule of reason’ as the prevailing standard of analysis…”). See also State Oil Co. v. Khan, 522 U.S. 3, 10 (1997) (“most antitrust claims are analyzed under a ‘rule of reason’ ”); Arizona v. Maricopa Cty. Med. Soc’y, 457 U.S. 332, 343 (1982) (“we have analyzed most restraints under the so-called ‘rule of reason’ ”).
 Chicago Board of Trade v. United States, 246 U.S. 231, 238 (1918).
 Dr. Miles Med. Co. v. John D. Park & Sons Co., 220 U.S. 373 (1911).
 United States v. Socony-Vacuum Oil Co., 310 U.S. 150 (1940).
 See e.g., United States v. Joyce, 895 F.3d 673, 677 (9th Cir. 2018); United States v. Bensinger, 430 F.2d 584, 589 (8th Cir. 1970).
 United States v. Sealy, Inc., 388 U.S. 350 (1967).
 Northern P. R. Co. v. United States, 356 U.S. 1 (1958).
 NYNEX Corp. v. Discon, Inc., 525 U.S. 128 (1998).
 Continental T.V. v. GTE Sylvania, 433 U.S. 36 (1977).
 Broadcast Music, Inc. v. Columbia Broadcasting System, Inc. 441 U.S. 1 (1979).
 Jefferson Parish Hosp. Dist. No. 2 v. Hyde, 466 U.S. 2 (1984).
 Northwest Wholesale Stationers, Inc. v. Pacific Stationery & Printing Co., 472 U.S. 284 (1985).
 State Oil Company v. Khan, 522 U.S. 3 (1997).
 Leegin Creative Leather Prods., Inc. v. PSKS, Inc. 551 U.S. 877, 885 (2007).
 California Dental Association v. FTC, 526 U.S. 756, 770 (1999).
 Leegin Creative Leather Prods., Inc. v. PSKS, Inc. 551 U.S. 877, 885 (2007).
 Business Electronics Corp. v. Sharp Electronics Corp., 485 U.S. 717, 723 (1988).
 Rohit Chopra & Lina M. Khan, The Case for “Unfair Methods of Competition” Rulemaking, 87 U. Chi. L. Rev. 357 (2020).
 Leegin Creative Leather Prods., Inc. v. PSKS, Inc. 551 U.S. 877, 886-87 (2007).
 The FTC’s attempts to bring cases condemning conduct as a standalone Section 5 violation were not successful. See e.g., Boise Cascade Corp. v. FTC, 637 F.2d 573 (9th Cir. 1980); Airline Guides, Inc. v. FTC, 630 F.2d 920 (2d Cir. 1980); E.I. du Pont de Nemours & Co. v. FTC, 729 F.2d 128 (2d Cir. 1984).
 Supreme Court precedent confirms that Section 5 of the FTC Act does not limit “unfair methods of competition” to practices that violate other antitrust laws (i.e., Sherman Act, Clayton Act). See e.g., FTC v. Ind. Fed’n of Dentists, 476 U.S. 447, 454 (1986); FTC v. Sperry & Hutchinson Co., 405 U.S. 233, 244 (1972); FTC v. Brown Shoe Co., 384 U.S. 316, 321 (1966); FTC v. Motion Picture Advert. Serv. Co., 344 U.S. 392, 394-95 (1953); FTC v. R.F. Keppel & Bros., Inc., 291 U.S. 304, 309-310 (1934).
The agency also has recognized recently that such agreements are subject to the rule of reason under the FTC Act, a decision upheld by the U.S. Court of Appeals for the Fifth Circuit. Impax Labs., Inc. v. FTC, No. 19-60394 (5th Cir. 2021).
 OMI Petition at 71 (“Given the real evidence of harm from certain exclusionary contracts and the specious justifications presented in their favor, the FTC should ban exclusivity with customers, distributors, or suppliers that results in substantial market foreclosure as per se illegal under the FTC Act. The present rule of reason governing exclusive dealing by all firms is infirm on multiple grounds.”) But see e.g., ZF Meritor, LLC v. Eaton Corp., 696 F.3d 254, 271 (3d Cir. 2012) (“Due to the potentially procompetitive benefits of exclusive dealing agreements, their legality is judged under the rule of reason.”).
 Broadcast Music, Inc. v. Columbia Broadcasting System, Inc. 441 U.S. 1, 8-9 (1979).
See e.g., Continental T.V. v. GTE Sylvania, 433 U.S. 36 (1977) (holding that nonprice vertical restraints have redeeming value and potential procompetitive justification and therefore are unsuitable for per se review); United States Steel Corp. v. Fortner Enters., Inc., 429 U.S. 610 (1977) (rejecting the assumption that tying lacked any purpose other than suppressing competition and recognized tying could be procompetitive); FTC v. Indiana Federation of Dentists, 476 U.S. 447 (1986) (declining to apply the per se rule even though the conduct at issue resembled a group boycott).
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]
I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a telecom policy event in Washington, D.C., from not long ago. The story featured a couple of well-known participants in federal telecom policy as they were talking about how to close the rural digital divide. The punchline of the story was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.
And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.
Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more time during his tenure as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on this trip. I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.
Rural Digital Divide Policy on the Ground
Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:
For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]
Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.
It’s not hard to see why. Looking at North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models—the cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are relatively few economies of scale that allow carriers to amortize these costs across multiple households. Add that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.
On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.
Pai’s Progress Closing the Rural Digital Divide
This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.
On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Pai’s résumé as chairman. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction-based approach to supporting service deployment.
This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed-wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.
The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.
Then there are the chairman’s accomplishments on the spectrum and wireless-internet fronts. Here, he faced resistance from both within the government and industry. In some of the more absurd episodes of government infighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this will prove to be a good or bad bet in the long term—whether fixed wireless will be able to offer the quality and speed of service its proponents promise, or whether it instead will be a short-run misallocation of capital that will require clawbacks and re-awards of funding in another few years—but the embrace of the technology demonstrated decisive leadership and thawed a too-limited and ossified understanding of what technologies could be used to offer service. Again, as I said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.
There is more that the commission did under Chairman Pai’s leadership, beyond its obvious orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. Similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee (BDAC). It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.
But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Pai said on his first day as chairman that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures stakeholders regularly engage with an appointed representative of the FCC, and putting in the time and miles to linger with a farmer to talk about the upcoming harvest season—these things make that priority real.
Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that will come after—that will serve us well for years to come.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]
It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug, alone, has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he has attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.
I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.
The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the Rural Digital Opportunity Fund (RDOF), which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of underserved areas should receive more than $16 billion in federal funding should spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.
But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.
That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology, and previous satellite offerings had been high-latency, which is tolerable for some uses but not others.
But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.
I think we are unlikely to see such regulatory risk-taking, both technically and politically, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views about topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The most defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.
President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.
The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services.
History has, by now, definitively demonstrated that the FCC’s January 2018 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it.
Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:
Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.
The Perennial Threat of Price Controls
Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation.
Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.”
But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).
This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:
The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)
For broadband providers, the FCC having the ability even to threaten rate regulation could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential to be used to stem inflation.
Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:
President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)
[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.
And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).
Rate Regulation by Any Other Name
Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes:
The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.
The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.
What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”
States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which claimed authority to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:
In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.
The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:
[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication.
The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:
Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.
The Economics of Price Controls
Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”
The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good, while also creating downward pressure on supply for that good.
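The mechanics are simple enough to sketch with a stylized linear example (the numbers here are purely illustrative and not drawn from any broadband market):

```latex
% Illustrative linear demand and supply (hypothetical parameters):
Q_d(p) = 100 - 2p, \qquad Q_s(p) = 3p
% Market clearing: 100 - 2p^* = 3p^* \;\Rightarrow\; p^* = 20,\; Q^* = 60.
% Impose a binding cap \bar{p} = 10 below the market-clearing price:
Q_d(\bar{p}) = 80 \;>\; Q_s(\bar{p}) = 30
% Excess demand of 50 units: the shortage the text describes.
```

The cap stimulates quantity demanded while shrinking quantity supplied, and the gap between the two is the shortage.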
Broadband rate regulation, whether implicit or explicit, will have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers make fewer and less intensive investments to maintain their networks, they would invest less in improving quality:
When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (p 9-10)
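The logic of the quoted passage can be sketched formally; the notation below is a stylized reconstruction of the Spence (1975) argument, not the paper’s own derivation:

```latex
% A monopolist faces a fixed (capped) price \bar{p}, demand
% q(\bar{p}, s) increasing in quality s, and cost C(q, s):
\max_{s}\; \pi(s) \;=\; \bar{p}\, q(\bar{p}, s) \;-\; C\big(q(\bar{p}, s),\, s\big)
% First-order condition: quality pays off only through the response
% of the marginal consumer, \partial q / \partial s:
\big(\bar{p} - C_q\big)\, \frac{\partial q}{\partial s} \;=\; C_s
% Total surplus also credits quality to inframarginal consumers,
% whose willingness to pay exceeds \bar{p}. Because the firm cannot
% capture that increment at a fixed price, its chosen s falls short
% of the surplus-maximizing level: "the firm always sets quality
% too low."
```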
Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:
The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p 10)
Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.
None of these concerns are dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market. If the price on one side of the market is set at zero through rate regulation, then there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else, it will require another layer of imprecise and complex regulation and even deeper constraints on investment).
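The “waterbed” arithmetic here can be sketched with a stylized break-even condition for a two-sided platform (illustrative notation only):

```latex
% The platform must recover its per-subscriber cost c from the
% two sides of the market jointly:
p_{\mathrm{consumer}} + p_{\mathrm{edge}} \;\ge\; c
% A zero-price mandate on the edge side shifts the full burden:
p_{\mathrm{edge}} = 0 \;\Rightarrow\; p_{\mathrm{consumer}} \;\ge\; c
% Any margin previously earned from edge providers must now come
% from subscribers (or from reduced network investment).
```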
Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”:
consumer access to lawful content, applications, and services; or
content providers’ ability to distribute lawful content, applications or services.
These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….
…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.
Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities always scrutinize such practices under a rule of reason, requiring a showing of substantial exclusionary effect and lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle and with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.
Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation.
While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.
Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.
Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development.
Municipal broadband, of course, can mean more than one thing: From “direct consumer” government-run systems, to “open access” where government builds the back-end, but leaves it up to private firms to bring the connections to consumers, to “middle mile” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.
There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move.
What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.
The importance of profit-and-loss to economic calculation
One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace.
This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs.
The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.
For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers is dependent on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even choices between the importance of upload speeds versus download speeds may be highly asymmetrical if determined by consumers.
While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.
For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit.
All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may be able to have losses over some period of time, they ultimately must turn a profit at some point, or there will be exit from the marketplace. Profit-and-loss both serve important functions.
Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and a signal for greater competition and innovation.
Municipal broadband as islands of chaos
Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.
The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.
This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes.
Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.
But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, beginning with the least cost-efficient. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.
Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate.
There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Steve Cernak (Partner, Bona Law).]
The antitrust laws have not been suspended during the current COVID-19 crisis. But based on questions received from clients plus others discussed with other practitioners, the changed economic conditions have raised some new questions and put a new slant on some old ones.
Under antitrust law’s flexible rule of reason standard, courts and enforcers consider the competitive effect of most actions under current and expected economic conditions. Because those conditions have changed drastically, at least temporarily, perhaps the antitrust assessments of certain actions will be different. Also, in a crisis, good businesses consider new options and reconsider others that had been rejected under the old conditions. So antitrust practitioners and enforcers need to be prepared for new questions and reconsiderations of others under new facts. Here are some that might cross their desks.
Benchmarking had its antitrust moment a few years ago as practitioners discovered and began to worry about this form of communication with competitors. Both before and since then, the comparison of processes and metrics to industry bests to determine where improvement efforts should be concentrated has not raised serious antitrust issues – if done properly. Appropriate topic choice and implementation, often involving counsel review and third-party collection, should stay the same during this crisis. Companies implementing new processes might be tempted to reach out to competitors to learn best practices. Any of those companies unfamiliar with the right way to benchmark should get up to speed. Counsel must be prepared to help clients quickly, but properly, benchmark some suddenly important activities, like methods for deep-cleaning workplaces.
Joint ventures where competitors work together to accomplish a task that neither could alone, or accomplish it more efficiently, have always received a receptive antitrust review. Often, those joint efforts have been temporary. Properly structured ones have always required the companies to remain competitors outside the joint venture. Joint efforts among competitors that did not make sense before the crisis might make perfect sense during it. For instance, a company whose distribution warehouse has been shut down by a shelter in place order might be able to use a competitor’s distribution assets to continue to get goods to the market.
Some joint ventures of competitors have received special antitrust assurances for decades. The National Cooperative Research and Production Act (NCRPA), originally passed in 1984 to protect research joint ventures of competitors, was later extended to cover certain joint production efforts and standard-development organizations. The law confirms that certain joint ventures of competitors will be judged under the rule of reason. If the parties file a very short notice with the DOJ Antitrust Division and FTC, they also will receive favorable treatment regarding damages and attorney’s fees in any antitrust lawsuit. For example, competitors cooperating on the development of new virus treatments might be able to use the NCRPA to protect joint research and even joint production of any resulting treatment.
Horizontal mergers that permanently combine the assets of two competitors are unlikely to be justified under the antitrust laws by small transitory blips in the economic landscape. A huge crisis, however, might be so large and create such long-lasting effects that certain mergers suddenly might make sense, both on business and antitrust grounds. That rationale was used during the most recent economic crisis to justify several large mergers of banks although other large industrial mergers considered at the same time were abandoned for various reasons. It is not yet clear if that reasoning is present in any industry now.
Remote communication among competitors
On a much smaller but more immediate scale, the new forms of communication being used while so many of us are physically separated have raised questions about the usual antitrust advice regarding communication with competitors. Antitrust practitioners have long advised clients about how to prepare for and conduct an in-person meeting of competitors, say at a trade association convention. That same advice would seem to apply if, with the in-person convention cancelled, the meeting will be held via Teams or Zoom. And don’t forget: the reminder that the same rules apply at the cocktail party after the meeting should also be given for the virtual version conducted via Remo.co.
While antitrust law is focused on actions by private parties that might prevent markets from properly working to serve consumers, the same rationales apply to unnecessary government interference in the market. The current health crisis has turned the spotlight back on certificate of need laws, a form of “brother may I?” government regulation that can allow current competitors to stifle entry by new competitors. Similarly, regulations that have slowed the use of telemedicine have been at least temporarily waived.
Solving the current health crisis and rebuilding the economy will take the best efforts of both our public institutions and private companies. Antitrust law as currently written and enforced can and should continue to play a role in aligning incentives so we need not rely on “the benevolence of the butcher” for our dinner and other necessities. Instead, proper application of antitrust law can allow companies to do their part to (reviving a slogan helpful in a prior national crisis) keep America rolling.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Joshua D. Wright is university professor and executive director of the Global Antitrust Institute at George Mason University’s Scalia Law School. He served as a commissioner of the Federal Trade Commission from 2013 through 2015.]
Much of this symposium celebrates Ajit’s contributions as chairman of the Federal Communications Commission and his accomplishments and leadership in that role. And rightly so. But Commissioner Pai, not just Chairman Pai, should also be recognized.
I first met Ajit when we were both minority commissioners at our respective agencies: the FCC and Federal Trade Commission. Ajit had started several months before I was confirmed. I watched his performance in the minority with great admiration. He reached new heights when he shifted from minority commissioner to chairman, and the accolades he will receive for that work are quite appropriate. But I want to touch on his time as a minority commissioner at the FCC and how that should inform the retrospective of his tenure.
Let me not bury the lede: Ajit Pai has been, in my view, the most successful, impactful minority commissioner in the history of the modern regulatory state. And it is that success that has led him to become the most successful and impactful chairman, too.
I must admit all of this success makes me insanely jealous. My tenure as a minority commissioner ran in parallel with Ajit. We joked together about our fierce duel to be the reigning king of regulatory dissents. We worked together fighting against net neutrality. We compared notes on dissenting statements and opinions. I tried to win our friendly competition. I tried pretty hard. And I lost; worse than I care to admit. But we had fun. And I very much admired the combination of analytical rigor, clarity of exposition, and intellectual honesty in his work. Anyway, the jealousy would be all too much if he weren’t also a remarkable person and friend.
The life of a minority commissioner can be a frustrating one. Like Sisyphus, the minority commissioner often wakes up each day to roll the regulatory (well, in this case, deregulatory) boulder up the hill, only to watch it roll down. And then do it again. And again. At times, it is an exhausting series of jousting matches with the windmills of Washington bureaucracy. It is not often that a minority commissioner has as much success as Commissioner Pai did: dissenting opinions ultimately vindicated by judicial review; substantive victories on critical policy issues; paving the way for institutional and procedural reforms.
It is one thing to write a raging dissent about how the majority has lost all principles. Fire and brimstone come cheap when there aren’t too many consequences to what you have to say. Measure a man after he has been granted power and a chance to use it, and only then will you have a true test of character. Ajit passes that test like few in government ever have.
This is part of what makes Ajit Pai so impressive. I have seen his work firsthand. The multitude of successes Ajit achieved as Chairman Pai were predictable, precisely because Commissioner Pai told the world exactly where he stood on important telecommunications policy issues, the reasons why he stood there, and then, well, he did what he said he would. The Pai regime was much more like a Le’Veon Bell run, between the tackles, than a no-look pass from Patrick Mahomes to Tyreek Hill. Commissioner Pai shared his playbook with the world; he told us exactly where he was going to run the ball. And then Chairman Pai did exactly that. And neither bureaucratic red tape nor political pressure—or even physical threat—could stop him.
Here is a small sampling of his contributions, many of them building on groundwork he laid in the minority:
Focus on Economic Analysis
One of Chairman Pai’s most important contributions to the FCC is his work to systematically incorporate economic analysis into FCC decision-making. The triumph of this effort was establishing the Office of Economics and Analytics (OEA) in 2018. The OEA’s focus on conducting economic analyses of the costs, benefits, and economic impacts of the commission’s proposed rules will be a critical part of agency decision-making from here on out. This act alone would form a legacy on which any agency head could happily rest. The OEA’s work will shape the agency for decades and ensure that agency decisions are made with the oversight economics provides.
This is a hard thing to do; just hiring economists is not enough. Structure matters. How economists get information to decision-makers determines if it will be taken seriously. To this end, Ajit has taken all the lessons from what has made the economists at the FTC so successful—and the lessons from the structural failures at other agencies—and applied them at the FCC.
Structural independence looks like “involving economists on cross-functional teams at the outset and allowing the economics division to make its own, independent recommendations to decision-makers.” And it is necessary for economics to be taken seriously within an agency structure. Ajit has assured that FCC decision-making will benefit from economic analysis for years to come.
Narrowing the Digital Divide
Chairman Pai made helping the disadvantaged get connected to the internet and narrowing the digital divide the top priorities during his tenure. And Commissioner Pai was fighting for this long before the pandemic started.
As businesses, schools, work, and even health care have moved online, the need to get Americans connected with high-speed broadband has never been greater. Under Pai’s leadership, the FCC has removed bureaucratic barriers and provided billions in funding to facilitate rural broadband buildout. We are talking about connections to some 700,000 rural homes and businesses in 45 states, many of which are gaining access to high-speed internet for the first time.
Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind. Tribal communities, particularly in the rural West, have been a keen focus of his, as he knows all-too-well the difficulties and increased costs associated with servicing those lands. He established programs to rebuild and expand networks in the Virgin Islands and Puerto Rico in an effort to bring the islands to parity with citizens living on the mainland.
You need not take my word for it; he really does talk about this all the time. As he said in a speech at the National Tribal Broadband Summit: “Since my first day in this job, I’ve said that closing the digital divide was my top priority. And as this audience knows all too well, nowhere is that divide more pronounced than on Tribal lands.” That work is not done; it is beyond any one person. But Ajit should be recognized for his work bridging the divide and laying the foundation for future gains.
And again, this work started as minority commissioner. Before he was chairman, Pai proposed projects for rural broadband development; he frequently toured underserved states and communities; and he proposed legislation to offer the 21st century promise to economically depressed areas of the country. Looking at Chairman Pai is only half the picture.
Keeping Americans Connected
One would not think that the head of the Federal Communications Commission would be a leader on important health-care issues, but Ajit has made a real difference here too. One of his major initiatives has been the development of telemedicine solutions to expand access to care in critical communities.
Beyond encouraging buildout of networks in less-connected areas, Pai’s FCC has also worked to allocate funding for health-care providers and educational institutions who were navigating the transition to remote services. He ensured that health-care providers’ telecommunications and information services were funded. He worked with the U.S. Department of Education to direct funds for education stabilization and allowed schools to purchase additional bandwidth. And he granted temporary additional spectrum usage to broadband providers to meet the increased demand upon our nation’s networks. Oh, and his Keep Americans Connected Pledge gathered commitment from more than 800 companies to ensure that Americans would not lose their connectivity due to pandemic-related circumstances. As if the list were not long enough, Congress’ January coronavirus relief package will ensure that these and other programs, like Rip and Replace, will remain funded for the foreseeable future.
I might sound like I am beating a dead horse here, but the seeds of this, too, were laid in his work in the minority. Here he is describing his work in a 2015 interview, as a minority commissioner:
My own father is a physician in rural Kansas, and I remember him heading out in his car to visit the small towns that lay 40 miles or more from home. When he was there, he could provide care for people who would otherwise never see a specialist at all. I sometimes wonder, back in the 1970s and 1980s, how much easier it would have been on patients, and him, if broadband had been available so he could provide healthcare online.
Agency Transparency and Democratization
Many minority commissioners like to harp on agency transparency. Some take a different view when they are in charge. But Ajit made good on his complaints about agency transparency when he became Chairman Pai. He did this through circulating draft items well in advance of monthly open meetings, giving people the opportunity to know what the agency was voting on.
You used to need a direct connection with the FCC to even be aware of what orders were being discussed—the worst of the D.C. swamp—but now anyone can read about the working items, in clear language.
These moves toward a more transparent, accessible FCC dispel the impression that the agency is run by Washington insiders who are disconnected from the average person. The meetings may well be dry and technical—they really are—but Chairman Pai’s statements are not only good-natured and humorous, but informative and substantive. The public has been well-served by his efforts here.
Incentivizing Innovation and Next-Generation Technologies
Chairman Pai will be remembered for his encouragement of innovation. Under his chairmanship, the FCC discontinued rules that unnecessarily required carriers to maintain costly older, lower-speed networks and legacy voice services. It streamlined the discontinuance process for lower-speed services if the carrier is already providing higher-speed service or if no customers are using the service. It also okayed streamlined notice following force majeure events like hurricanes to encourage investment and deployment of newer, faster infrastructure and services following destruction of networks. The FCC also approved requests by companies to provide high-speed broadband through non-geostationary orbit satellite constellations and created a streamlined licensing process for small satellites to encourage faster deployment.
This is what happens when you get a tech nerd at the head of an agency he loves and cares for. A serious commitment to good policy with an eye toward the future.
Restoring Internet Freedom
This is a pretty sensitive one for me. You hear less about it now, other than some murmurs from the Biden administration about changing it, but the debate over net neutrality got nasty and apocalyptic.
It was everywhere; people saying Chairman Pai would end the internet as we know it. The whole web blacked out for a day in protest. People mocked up memes showing a 25 cent-per-Google-search charge. And as a result of this over-the-top rhetoric, my friend, and his family, received death threats.
That is truly beyond the pale. One could not blame anyone for leaving public service in such an environment. I cannot begin to imagine what I would have done in Ajit’s place. But Ajit took the threats on his life with grace and dignity, never lost his sense of humor, and continued to serve the public dutifully with remarkable courage. I think that says a lot about him. And the American public is lucky to have benefited from his leadership.
Now, for the policy stuff. Though it should go without saying, the light-touch framework Chairman Pai returned us to—as opposed to the public utility one—will ensure that the United States maintains its leading position on technological innovation in 5G networks and services. The fact that we have endured COVID—and the massive strain on the internet it has caused—with little to no noticeable impact on internet services is all the evidence you need that he made the right choice. Ajit has rightfully earned the title of the “5G Chairman.”
I cannot give Ajit all the praise he truly deserves without sounding sycophantic, or bribed. There are any number of windows into his character, but one rises above the rest for me. And I wanted to take the extra time to thank Ajit for it.
Every year, without question, no matter what was going on—even as chairman—Ajit would come to my classes and talk to my students. At length. In detail. And about any subject they wished. He stayed until he answered all of their questions. If I didn’t politely shove him out of the class to let him go do his real job, I’m sure he would have stayed until the last student left. And if you know anything about how to judge a person’s character, that will tell you all you need to know.
As Thomas Sowell has noted many times, political debates often involve the use of words which if taken literally mean something very different than the connotations which are conveyed. Examples abound in the debate about broadband buildout.
There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain.
“Access” and “high-speed broadband”
For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” differs markedly from what the average American would expect.
A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice that presents itself.
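To make the density point concrete, here is a stylized back-of-the-envelope sketch. All figures (cost per mile of fiber, homes per mile, take rate) are invented for illustration only; they are not estimates of actual buildout costs.

```python
# Stylized, hypothetical illustration of why per-subscriber buildout cost
# explodes as population density falls. All numbers are invented.

COST_PER_MILE = 30_000  # assumed fiber construction cost per mile ($)

def cost_per_subscriber(homes_per_mile: float, take_rate: float) -> float:
    """Construction cost spread across the subscribers who actually sign up."""
    subscribers_per_mile = homes_per_mile * take_rate
    return COST_PER_MILE / subscribers_per_mile

# Same take rate, very different densities.
urban = cost_per_subscriber(homes_per_mile=100, take_rate=0.4)  # dense city street
rural = cost_per_subscriber(homes_per_mile=2, take_rate=0.4)    # farm road

print(f"urban: ${urban:,.0f} per subscriber")  # $750
print(f"rural: ${rural:,.0f} per subscriber")  # $37,500
```

Under these invented numbers, the same mile of fiber costs fifty times more per subscriber on the farm road than on the city street, which is the economic reality behind the differing options rural consumers face.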
But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem which calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure to rural areas.
“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.
However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The disconnect between consumer demand as measured in the marketplace in light of real trade-offs between cost and performance and this arbitrary number is not well-explained in this study. The assumption is simply that faster is better, and that the building of faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.
But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps.
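As a rough sanity check on the gap between typical needs and the proposed 100 Mbps symmetrical standard, the comparison below uses ballpark upload figures for common tasks. These per-task numbers are loose assumptions for illustration, not measurements or official recommendations.

```python
# Illustrative comparison of approximate upload needs against a proposed
# 100 Mbps symmetrical standard. The per-task Mbps figures are rough
# ballpark assumptions for illustration only.

PROPOSED_UPLOAD = 100  # Mbps, the symmetrical standard proposed by Sallet

approx_upload_needs_mbps = {
    "HD video call": 3,
    "1080p live stream": 6,
    "4K live stream": 25,
}

for task, mbps in approx_upload_needs_mbps.items():
    print(f"{task}: ~{mbps} Mbps ({mbps / PROPOSED_UPLOAD:.0%} of 100 Mbps)")

# Even the most demanding task here uses a fraction of the proposed standard.
heaviest = max(approx_upload_needs_mbps.values())
print(f"heaviest task: ~{heaviest} Mbps upload")
```

Even under generous assumptions, the heaviest of these tasks would consume only a quarter of the proposed upload capacity.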
“Competition” and “Overbuilding”
Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.
The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at the lowest cost. Specific markets are often subject to competition not only from the firms that exist within those markets, but also from potential competitors who may enter the market any time potential profits reach a point high enough to justify the costs of entry. An important inference from this is that a temporary monopoly, in the sense that one firm has a significant share of the market, is not in itself illegal under antitrust law, even if the firm is charging monopoly prices. Potential entry is as real in its effects as actual competitors in forcing incumbents to continue to innovate and provide value to consumers.
However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks.
In fact, Sallet argues that fears of “overbuilding” are really just fears of competition by incumbent broadband ISPs:
Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more.
Sallet makes two rhetorical moves here to make his argument.
The first is redefining “overbuilding,” which refers to literally building a new network on top of (that is, “over”) previously built architecture, as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.
The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.
But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, high fixed costs sometimes limit the number of firms that can viably exist in a competitive market. In some markets, known as natural monopolies, high infrastructure costs and other barriers to entry relative to the size of the market lead to a situation where it is cheaper for a single firm to provide a good or service than for multiple firms to do so. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is a significant risk in government subsidizing entry.
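The natural-monopoly logic above reduces to simple average-cost arithmetic: when fixed costs are large relative to the market, splitting the market across more networks raises everyone's per-subscriber cost. A minimal sketch, using invented numbers:

```python
# Textbook sketch of why high fixed costs can make a single provider
# cheaper than several. All numbers are hypothetical.

FIXED_COST = 1_000_000   # network buildout cost, per firm ($)
MARGINAL_COST = 10       # cost to serve one extra subscriber ($)
MARKET_SIZE = 10_000     # subscribers in the market

def average_cost(subscribers: float) -> float:
    """Per-subscriber cost for one firm serving `subscribers` customers."""
    return FIXED_COST / subscribers + MARGINAL_COST

one_firm = average_cost(MARKET_SIZE)       # whole market, one network
two_firms = average_cost(MARKET_SIZE / 2)  # market split across two networks

print(f"one firm:  ${one_firm:.2f} per subscriber")   # $110.00
print(f"two firms: ${two_firms:.2f} per subscriber")  # $210.00
```

In this stylized case, duplicating the network nearly doubles the cost of serving each subscriber, which is why the "right" number of firms cannot simply be decreed in advance.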
Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.”
As noted by economist George Ford in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry which often causes incumbents to act competitively even in the absence of competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to that which could harm it. If the market only profitably sustains one or two competitors, adding another through municipal broadband or subsidizing a new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive because the subsidized entrant is shielded from the market test of profit-and-loss.
The “Donut Hole” Problem
The term “donut hole” is a final example of how words can be used to confuse rather than enlighten in this debate.
There is broad agreement that to generate the positive externalities from universal service, there needs to be subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches.
For instance, some critics of the current subsidy approach have identified a phenomenon where the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but there is relatively paltry Internet service in between due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while “underserved” outlying parts immediately surrounding town centers receive nothing under current policy.
Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served.
Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole of funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally only provide municipal service — they only provide lower-cost service. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (because they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base.
In this version of the “donut hole,” municipal entry erodes the city-center revenues that private firms rely on to support the costs of providing service to outlying areas, with two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. Second, it increases the average cost of providing service across the private firm’s network (because the firm no longer recovers as much of its costs from the lower-cost city core), which increases the prices that must be charged to rural users in order to justify offering service at all.
Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.
Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.
This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.
In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.
“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”
Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.
They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”
An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”
New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us. Facebook, the critics charge, has “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”
Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”
And then there’s Amazon. It’s too efficient. Alex, writing in Fortune, worries that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”
Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.
Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives.”
As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.
The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forgo using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.
Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”
One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games may broaden their attention spans and improve their abstract thinking.
Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.
Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.
We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like,” and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.
One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.
There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.
Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)
The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.
In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned.
In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”
There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.
I have reproduced the principles below, and they are available online (along with the list of signatories) here.
Section 230 holds those who create content online responsible for that content and, controversially today, protects online intermediaries from liability for content generated by third parties except in specific circumstances. Advocates on both the political right and left have recently begun to argue more ardently for the repeal or at least reform of Section 230.
There is always good reason to consider whether decades-old laws, especially those aimed at rapidly evolving industries, should be updated to reflect both changed circumstances as well as new learning. But discussions over whether and how to reform (or repeal) Section 230 have, thus far, offered far more heat than light.
Indeed, later today President Trump will hold a “social media summit” at the White House to which he has apparently invited a number of right-wing political firebrands — but no Internet policy experts or scholars and no representatives of social media firms. Nothing about the event suggests it will produce — or even aim at — meaningful analysis of the issues. Trump himself has already concluded about social media platforms that “[w]hat they are doing is wrong and possibly illegal.” On the basis of that (legally baseless) conclusion, “a lot of things are being looked at right now.” This is not how good policy decisions are made.
The principles we published today are intended to foster an environment in which discussion over these contentious questions may actually be fruitfully pursued. But they also sound a cautionary note. As we write in the preamble to the principles:
[W]e value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall.
Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.
Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.
As the diversity of signatories to these principles demonstrates, there is room for reasoned discussion among Section 230 advocates and skeptics alike. For some, these principles represent a significant move from their initial, hard-line positions, undertaken in the face of the very real risk that if they give an inch others will latch on to their concessions to take a mile. They should be commended for their willingness nevertheless to engage seriously with the issues — and, challenged with de-politicized, sincere, and serious discussion, to change their minds.
Everyone who thinks seriously about the issues implicated by Section 230 can — or should — agree that it has been instrumental in the birth and growth of the Internet as we know it: both the immense good and the unintended bad. That recognition does not lead inexorably to one conclusion or another regarding the future of Section 230, however.
Ensuring that what comes next successfully confronts the problems without negating the benefits starts with the recognition that both costs and benefits exist, and that navigating the trade-offs is a fraught endeavor that absolutely will not be accomplished by romanticized assumptions and simplistic solutions. Efforts to update Section 230 should not deny its past successes, ignore reasonable concerns, nor co-opt the process to score costly political points.
But that’s just the start. It’s incumbent upon those seeking to reform Section 230 to offer ideas and proposals that reflect the reality and the complexity of the world they seek to regulate. The basic principles we published today offer a set of minimum reasonable guardrails for those efforts. Adherence to these principles would allow plenty of scope for potential reforms while helping to ensure that they don’t throw out the baby with the bathwater.
Accepting, for example, the reality that a “neutral” presentation of content is impossible (Principle #4) and that platform moderation is complicated and also essential for civil discourse (Principle #3) means admitting that a preferred standard of content moderation is just that — a preference. It may be defensible to impose that preference on all platforms by operation of law, and these principles do not obviate such a position. But they do demand that such an opinion be rigorously defended. It is insufficient simply to call for “neutrality” or any other standard (“all ideological voices must be equally represented,” e.g.) without making a valid case for what would be gained and why the inevitable costs and trade-offs would nevertheless be worthwhile.
All of us who drafted and/or signed these principles are willing to come to the table to discuss good-faith efforts to reform Section 230 that recognize and reasonably account for such trade-offs. It remains to be seen whether we can wrest the process from those who would use it to promote their static, unrealistic, and politicized vision at the expense of everyone else.
Liability for User-Generated Content Online:
Principles for Lawmakers
Policymakers have expressed concern about both harmful online speech and the content moderation practices of tech companies. Section 230, enacted as part of the bipartisan Communications Decency Act of 1996, says that Internet services, or “intermediaries,” are not liable for illegal third-party content except with respect to intellectual property, federal criminal prosecutions, communications privacy (ECPA), and sex trafficking (FOSTA). Of course, Internet services remain responsible for content they themselves create.
As civil society organizations, academics, and other experts who study the regulation of user-generated content, we value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall. We hope the following principles help any policymakers considering amendments to Section 230.
Principle #1: Content creators bear primary responsibility for their speech and actions.
Content creators — including online services themselves — bear primary responsibility for their own content and actions. Section 230 has never interfered with holding content creators liable. Instead, Section 230 restricts only who can be liable for the harmful content created by others.
Law enforcement online is as important as it is offline. If policymakers believe existing law does not adequately deter bad actors online, they should (i) invest more in the enforcement of existing laws, and (ii) identify and remove obstacles to the enforcement of existing laws. Importantly, while anonymity online can certainly constrain the ability to hold users accountable for their content and actions, courts and litigants have tools to pierce anonymity. And in the rare situation where truly egregious online conduct simply isn’t covered by existing criminal law, the law could be expanded. But if policymakers want to avoid chilling American entrepreneurship, it’s crucial to avoid imposing criminal liability on online intermediaries or their executives for unlawful user-generated content.
Principle #2: Any new intermediary liability must not target constitutionally protected speech.
The government shouldn’t require — or coerce — intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Moreover, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship, or even to avoid offering speech forums altogether.
Principle #3: The law shouldn’t discourage Internet services from moderating content.
To flourish, the Internet requires that site managers have the ability to remove legal but objectionable content — including content that would be protected under the First Amendment from censorship by the government. If Internet services could not prohibit harassment, pornography, racial slurs, and other lawful but offensive or damaging material, they couldn’t facilitate civil discourse. Even when Internet services have the ability to moderate content, their moderation efforts will always be imperfect given the vast scale of even relatively small sites and the speed with which content is posted. Section 230 ensures that Internet services can carry out this socially beneficial but error-prone work without exposing themselves to increased liability; penalizing them for imperfect content moderation or second-guessing their decision-making will only discourage them from trying in the first place. This vital principle should remain intact.
Principle #4: Section 230 does not, and should not, require “neutrality.”
Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, deprioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.
Principle #5: We need a uniform national legal standard.
Most Internet services cannot publish content on a state-by-state basis, so state-by-state variations in liability would force compliance with the most restrictive legal standard. In its current form, Section 230 prevents this dilemma by setting a consistent national standard — which includes potential liability under the uniform body of federal criminal law. Internet services, especially smaller companies and new entrants, would find it difficult, if not impossible, to manage the costs and legal risks of facing potential liability under state civil law, or of bearing the risk of prosecution under state criminal law.
Principle #6: We must continue to promote innovation on the Internet.
Section 230 encourages innovation in Internet services, especially by smaller services and start-ups who need the most protection from potentially crushing liability. The law must continue to protect intermediaries not merely from liability, but from having to defend against excessive, often-meritless suits — what one court called “death by ten thousand duck-bites.” Without such protection, compliance, implementation, and litigation costs could strangle smaller companies before they even emerge, while larger, incumbent technology companies would be much better positioned to absorb these costs. Any proposal to reform Section 230 that is calibrated to what might be possible for the Internet giants will necessarily mis-calibrate the law for smaller services.
Principle #7: Section 230 should apply equally across a broad spectrum of online services.
Section 230 applies to services that users never interact with directly. The further removed an Internet service — such as a DDoS protection provider or domain name registrar — is from an offending user’s content or actions, the more blunt its tools to combat objectionable content become. Unlike social media companies or other user-facing services, infrastructure providers cannot take measures like removing individual posts or comments. Instead, they can only shutter entire sites or services, thus risking significant collateral damage to inoffensive or harmless content. Requirements drafted with user-facing services in mind will likely not work for these non-user-facing services.
In an ideal world, the discussion and debate about how (or if) to tax e-cigarettes, heat-not-burn, and other tobacco harm-reduction products would be guided by science. Policy makers would confer with experts, analyze evidence, and craft prudent and sensible laws and regulations.
In the real world, however, politicians are guided by other factors.
There are two things to understand, both of which are based on my conversations with policy staff in Washington and elsewhere.
First, this is a battle over tax revenue. Politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.
This is very much akin to the concern that electric and fuel-efficient cars will reduce the revenue collected from excise taxes on gasoline.
In the case of fuel taxes, politicians are anxiously looking at other sources of revenue, such as miles-driven levies. Their main goal is to maintain – or preferably increase – the amount of money that is diverted to the redistributive state so that politicians can reward various interest groups.
In the case of tobacco, a reduction in the number of smokers (or the tax-driven propensity of smokers to seek out black-market cigarettes) is leading politicians to concoct new schemes for taxing e-cigarettes and related non-combustible products.
Second, this is a quasi-ideological fight. Not about capitalism versus socialism, or big government versus small government. It’s basically a fight over paternalism, or a battle over goals.
For all intents and purposes, the question is this: should lawmakers seek to discourage both tobacco use and vaping simultaneously, because both carry some risk (and perhaps because both are considered vices for the lower classes)? Or should they welcome vaping, since it leads to harm reduction as smokers shift to a dramatically safer way of consuming nicotine?
In statistics, researchers recognize the dangers of two types of mistakes: Type I errors (also known as “false positives”) and Type II errors (also known as “false negatives”).
How does this relate to smoking, vaping, and taxes?
Simply stated, both sides of the fight are focused on a key goal and secondary issues are pushed aside. In other words, tradeoffs are being ignored.
The advocates of high taxes on e-cigarettes and other non-combustible products are fixated on the possibility that vaping will entice some people into the market. Maybe vaping will even act as a gateway to smoking. So they want high taxes on vaping, akin to the high taxes on tobacco, even though the net result is that many smokers stick with cigarettes instead of switching to less harmful products.
For all intents and purposes, the fight over the taxation of vaping is similar to other ideological fights.
The old joke in Washington is that a conservative is someone who will jail 99 innocent people in order to put one crook in prison and a liberal is someone who will free 99 guilty people to prevent one innocent person from being convicted (or, if you prefer, a conservative will deny 99 poor people to catch one welfare fraudster and a liberal will line the pockets of 99 fraudsters to make sure one genuinely poor person gets money).
The vaping fight hasn’t quite reached this stage, but the battle lines are very familiar. At some point in the future, observers may joke that one side is willing to accept more smoking if one teenager forgoes vaping while the other side is willing to have lots of vapers if it means one less smoker.
Having explained the real drivers of this debate, I’ll close by injecting my two cents and explaining why the paternalists are wrong. But rather than focus on libertarian-type arguments about personal liberty, I’ll rely on three points, all of which are based on conventional cost-benefit analysis and the sensible approach to excise taxation.
First, tax policy should focus on incentivizing the switch, not on punishing those who choose less harmful products. The goal should be harm reduction rather than revenue maximization.
Second, low tax burdens also translate into lower long-run spending burdens because a shift to vaping means a reduction in overall healthcare costs related to smoking cigarettes.
Third, it makes no sense to impose punitive “sin taxes” on behaviors that are much less, well, sinful. There’s a big difference in the health and fiscal impact of cigarettes compared to the alternatives.
One final point is that this issue has a reverse-class-warfare component. Anti-smoking activists generally have succeeded in stigmatizing cigarette consumption and most smokers are now disproportionately from the lower-income community. For better (harm reduction) or worse (elitism), low-income smokers are generally treated with disdain for their lifestyle choices.
It is not an explicit policy, but that disdain now seems to extend to any form of nicotine consumption, even though the health effects of vaping are vastly lower.