
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

One of the themes that has run throughout this symposium has been that, throughout his tenure as both a commissioner and as chairman, Ajit Pai has brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.

The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship when he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both foreshadowed how he would look at future issues as chairman and helped the public understand exactly how he would approach new challenges before the FCC (McDowell, Wright).

One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.

Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.

Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that had long been rooted in 20th-century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes:

Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.

This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media will look much brighter thanks to Chairman Pai’s vision.

Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher-value uses than those of any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is neither free nor costless to maintain, and Tom Hazlett believes that there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest-valued use:

The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.

And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether those spectrum bands could be opened up for wireless use, to the L-Band Order, in which the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.

The controversial L-Band Order was another example of Chairman Pai displaying both political acumen and an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai deftly shepherded the L-Band Order through and ensured that important spectrum was made available for commercial wireless use.

A native of Kansas, Chairman Pai gave rural broadband rollout pride of place among the priorities of his FCC, and his work over the last four years demonstrates that commitment (Hurwitz, Wright). As Gus Hurwitz notes, “the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity.”

Other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G Fund, provides the necessary policy framework for extending greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places beyond traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).

But perhaps one of Chairman Pai’s best and (hopefully) most lasting contributions is the de-politicization of the FCC and the greater transparency with which it operated. In contrast to previous administrations, the Pai FCC was overwhelmingly bipartisan in character, with many bipartisan votes regularly taken at monthly meetings (Jamison). In important respects, this bipartisan (or nonpartisan) character was directly reflected in Chairman Pai’s championing of the Office of Economics and Analytics at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig), the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful not just to hire a bunch of economists, but to learn from other agencies that have better integrated economics, and to establish a structure that would enable the commission’s economists to materially contribute to better policy.

We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017 to 2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys. Once conclusions were reached, economics would often be backfilled to support those conclusions. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source for information and policy analysis before conclusions were reached. As Wright noted, the Federal Trade Commission had already adopted this approach. With the FCC moving to do this as well, communications policy in the United States is on much sounder footing thanks to Chairman Pai.

Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.

Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:

The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.

Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff’s editorial privileges over an order’s text and introducing a simple “fact sheet” to explain orders (Lyons).

One of the most interesting insights into the character of Chairman Pai was his willingness to reverse course and take risks to ensure that the FCC promoted innovation instead of obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of the prospects of SpaceX to introduce broadband through its low-Earth-orbit satellite systems, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman’s willingness to change his mind and of his refusal to let policy remain in a comfortable zone that excludes potential innovation.

The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as commitment to careful, economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Jerry Ellig was a research professor at The George Washington University Regulatory Studies Center and served as chief economist at the Federal Communications Commission from 2017 to 2018. Tragically, he passed away Jan. 20, 2021. TOTM is honored to publish his contribution to this symposium.]

One significant aspect of Chairman Ajit Pai’s legacy is not a policy change, but an organizational one: establishment of the Federal Communications Commission’s (FCC’s) Office of Economics and Analytics (OEA) in 2018.

Prior to OEA, most of the FCC’s economists were assigned to the various policy bureaus, such as Wireless, Wireline Competition, Public Safety, Media, and International. Each of these bureaus had its own chief economist, but the rank-and-file economists reported to the managers who ran the bureaus – usually attorneys who also developed policy and wrote regulations. In the words of former FCC Chief Economist Thomas Hazlett, the FCC had “no location anywhere in the organizational structure devoted primarily to economic analysis.”

Establishment of OEA involved four significant changes. First, most of the FCC’s economists (along with data strategists and auction specialists) are now grouped together into an organization separate from the policy bureaus, and they are managed by other economists. Second, the FCC rules establishing the new office tasked OEA with reviewing every rulemaking, reviewing every other item with economic content that comes before the commission for a vote, and preparing a full benefit-cost analysis for any regulation with $100 million or more in annual economic impact. Third, a joint memo from the FCC’s Office of General Counsel and OEA specifies that economists are to be involved in the early stages of all rulemakings. Fourth, the memo also indicates that FCC regulatory analysis should follow the principles articulated in Executive Order 12866 and Office of Management and Budget Circular A-4 (while specifying that the FCC, as an independent agency, is not bound by the executive order).

While this structure for managing economists was new for the FCC, it is hardly uncommon in federal regulatory agencies. Numerous independent agencies that deal with economic regulation house their economists in a separate bureau or office, including the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Surface Transportation Board, the Office of the Comptroller of the Currency, and the Federal Trade Commission. The SEC offers especially close parallels to the FCC. A guidance memo adopted in 2012 by the SEC’s Office of General Counsel and Division of Risk, Strategy and Financial Innovation (the name of the division where economists and other analysts were located) specifies that economists are to be involved in the early stages of all rulemakings and articulates best analytical practices based on Executive Order 12866 and Circular A-4.

A separate economics office offers several advantages over the FCC’s prior approach. It gives the economists greater freedom to offer frank advice, enables them to conduct higher-quality analysis more consistent with the norms of their profession, and may ultimately make it easier to uphold FCC rules that are challenged in court.

Independence.  When I served as chief economist at the FCC in 2017-2018, I gathered from conversations that the most common practice in the past was for attorneys who wrote rules to turn to economists for supporting analysis after key decisions had already been made. This was not always the process, but it often occurred. The internal working group of senior FCC career staff who drafted the plan for OEA reached similar conclusions. After the establishment of OEA, an FCC economist I interviewed noted how his role had changed: “My job used to be to support the policy decisions made in the chairman’s office. Now I’m much freer to speak my own mind.”

Ensuring economists’ independence is not a problem unique to the FCC. In a 2017 study, Stuart Shapiro found that most of the high-level economists he interviewed who worked on regulatory impact analyses in federal agencies perceive that economists can be more objective if they are located outside the program office that develops the regulations they are analyzing. As one put it, “It’s very difficult to conduct a BCA [benefit-cost analysis] if our boss wrote what you are analyzing.” Interviews with senior economists and non-economists who work on regulation that I conducted for an Administrative Conference of the United States project in 2019 revealed similar conclusions across federal agencies. Economists located in organizations separate from the program office said that structure gave them greater independence and ability to develop better analytical methodologies. On the other hand, economists located in program offices said they experienced or knew of instances where they were pressured or told to produce an analysis with the results decision-makers wanted.

The FTC provides an informative case study. From 1955 to 1961, many of the FTC’s economists reported to the attorneys who conducted antitrust cases; in 1961, they were moved into a separate Bureau of Economics. Fritz Mueller, the FTC chief economist responsible for moving the antitrust economists back into the Bureau of Economics, noted that they were originally placed under the antitrust attorneys because the attorneys wanted more control over the economic analysis. A 2015 evaluation by the FTC’s Inspector General concluded that the Bureau of Economics’ existence as a separate organization improves its ability to offer “unbiased and sound economic analysis to support decision-making.”

Higher-quality analysis. An issue closely related to economists’ independence is the quality of the economic analysis. Executive branch regulatory economists interviewed by Richard Williams expressed concern that the economic analysis was more likely to be changed to support decisions when the economists are located in the program office that writes the regulations. More generally, a study that Catherine Konieczny and I conducted while we were at the FCC found that executive branch agencies are more likely to produce higher-quality regulatory impact analyses if the economists responsible for the analysis are in an independent economics office rather than the program office.

Upholding regulations in court. In Michigan v. EPA, the Supreme Court held that it is unreasonable for agencies to refuse to consider regulatory costs if the authorizing statute does not prohibit them from doing so. This precedent will likely increase judicial expectations that agencies will consider economic issues when they issue regulations. The FCC’s OGC-OEA memo cites examples of cases where the quality of the FCC’s economic analysis either helped or harmed the commission’s ability to survive legal challenge under the Administrative Procedure Act’s “arbitrary and capricious” standard. More systematically, a recent Regulatory Studies Center working paper finds that a higher-quality economic analysis accompanying a regulation reduces the likelihood that courts will strike down the regulation, provided that the agency explains how it used the analysis in decisions.

Two potential disadvantages of a separate economics office are that it may make the economists easier to ignore (what former FCC Chief Economist Tim Brennan calls the “Siberia effect”) and may lead the economists to produce research that is less relevant to the practical policy concerns of the policymaking bureaus. The FCC’s reorganization plan took these disadvantages seriously.

To ensure that the ultimate decision-makers—the commissioners—have access to the economists’ analysis and recommendations, the rules establishing the office give OEA explicit responsibility for reviewing all items with economic content that come before the commission. Each item is accompanied by a cover memo that indicates whether OEA believes there are any significant issues, and whether they have been dealt with adequately. To ensure that economists and policy bureaus work together from the outset of regulatory initiatives, the OGC-OEA memo instructs:

Bureaus and Offices should, to the extent practicable, coordinate with OEA in the early stages of all Commission-level and major Bureau-level proceedings that are likely to draw scrutiny due to their economic impact. Such coordination will help promote productive communication and avoid delays from the need to incorporate additional analysis or other content late in the drafting process. In the earliest stages of the rulemaking process, economists and related staff will work with programmatic staff to help frame key questions, which may include drafting options memos with the lead Bureau or Office.

While presiding over his final commission meeting on Jan. 13, Pai commented, “It’s second nature now for all of us to ask, ‘What do the economists think?’” The real test of this institutional innovation will be whether that practice continues under a new chair in the next administration.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Joshua D. Wright is university professor and executive director of the Global Antitrust Institute at George Mason University’s Scalia Law School. He served as a commissioner of the Federal Trade Commission from 2013 through 2015.]

Much of this symposium celebrates Ajit’s contributions as chairman of the Federal Communications Commission and his accomplishments and leadership in that role. And rightly so. But Commissioner Pai, not just Chairman Pai, should also be recognized.

I first met Ajit when we were both minority commissioners at our respective agencies: the FCC and Federal Trade Commission. Ajit had started several months before I was confirmed. I watched his performance in the minority with great admiration. He reached new heights when he shifted from minority commissioner to chairman, and the accolades he will receive for that work are quite appropriate. But I want to touch on his time as a minority commissioner at the FCC and how that should inform the retrospective of his tenure.

Let me not bury the lead: Ajit Pai has been, in my view, the most successful, impactful minority commissioner in the history of the modern regulatory state. And it is that success that has led him to become the most successful and impactful chairman, too.

I must admit all of this success makes me insanely jealous. My tenure as a minority commissioner ran in parallel with Ajit’s. We joked together about our fierce duel to be the reigning king of regulatory dissents. We worked together fighting against net neutrality. We compared notes on dissenting statements and opinions. I tried to win our friendly competition. I tried pretty hard. And I lost, worse than I care to admit. But we had fun. And I very much admired the combination of analytical rigor, clarity of exposition, and intellectual honesty in his work. Anyway, the jealousy would be all too much if he weren’t also a remarkable person and friend.

The life of a minority commissioner can be a frustrating one. Like Sisyphus, the minority commissioner often wakes up each day to roll the regulatory (well, in this case, deregulatory) boulder up the hill, only to watch it roll down. And then do it again. And again. At times, it is an exhausting series of jousting matches with the windmills of Washington bureaucracy. It is not often that a minority commissioner has as much success as Commissioner Pai did: dissenting opinions ultimately vindicated by judicial review; substantive victories on critical policy issues; paving the way for institutional and procedural reforms.

It is one thing to write a raging dissent about how the majority has lost all principles. Fire and brimstone come cheap when there aren’t too many consequences to what you have to say. Measure a man after he has been granted power and a chance to use it, and only then will you have a true test of character. Ajit passes that test like few in government ever have.

This is part of what makes Ajit Pai so impressive. I have seen his work firsthand. The multitude of successes Ajit achieved as Chairman Pai were predictable, precisely because Commissioner Pai told the world exactly where he stood on important telecommunications policy issues, the reasons why he stood there, and then, well, he did what he said he would. The Pai regime was much more like a Le’Veon Bell run, between the tackles, than a no-look pass from Patrick Mahomes to Tyreek Hill. Commissioner Pai shared his playbook with the world; he told us exactly where he was going to run the ball. And then Chairman Pai did exactly that. And neither bureaucratic red tape nor political pressure—or even physical threat—could stop him.

Here is a small sampling of his contributions, many of them building on groundwork he laid in the minority:

Focus on Economic Analysis

One of Chairman Pai’s most important contributions to the FCC is his work to systematically incorporate economic analysis into FCC decision-making. The triumph of this effort was establishing the Office of Economics and Analytics (OEA) in 2018. The OEA’s focus on conducting economic analyses of the costs, benefits, and economic impacts of the commission’s proposed rules will be a critical part of agency decision-making from here on out. This act alone would form a legacy upon which any agency head could comfortably rest. The OEA’s work will shape the agency for decades and ensure that agency decisions are made with the oversight economics provides.

This is a hard thing to do; just hiring economists is not enough. Structure matters. How economists get information to decision-makers determines whether it will be taken seriously. To this end, Ajit has taken all the lessons from what has made the economists at the FTC so successful—and the lessons from the structural failures at other agencies—and applied them at the FCC.

Structural independence looks like “involving economists on cross-functional teams at the outset and allowing the economics division to make its own, independent recommendations to decision-makers.”[1] And it is necessary for economics to be taken seriously within an agency structure. Ajit has ensured that FCC decision-making will benefit from economic analysis for years to come.

Narrowing the Digital Divide

Chairman Pai made helping the disadvantaged get connected to the internet and narrowing the digital divide his top priorities. And Commissioner Pai was fighting for this long before the pandemic started.

As businesses, schools, work, and even health care have moved online, the need to get Americans connected with high-speed broadband has never been greater. Under Pai’s leadership, the FCC has removed bureaucratic barriers[2] and provided billions in funding[3] to facilitate rural broadband buildout. We are talking about connections to some 700,000 rural homes and businesses in 45 states, many of which are gaining access to high-speed internet for the first time.

Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind. Tribal communities,[4] particularly in the rural West, have been a keen focus of his, as he knows all too well the difficulties and increased costs associated with servicing those lands. He established programs to rebuild and expand networks in the Virgin Islands and Puerto Rico[5] in an effort to bring their residents to parity with citizens living on the mainland.

You need not take my word for it; he really does talk about this all the time. As he said in a speech at the National Tribal Broadband Summit: “Since my first day in this job, I’ve said that closing the digital divide was my top priority. And as this audience knows all too well, nowhere is that divide more pronounced than on Tribal lands.” That work is not done; it is beyond any one person. But Ajit should be recognized for his work bridging the divide and laying the foundation for future gains.

And again, this work started as minority commissioner. Before he was chairman, Pai proposed projects for rural broadband development; he frequently toured underserved states and communities; and he proposed legislation to extend the promise of the 21st century to economically depressed areas of the country. Looking at Chairman Pai is only half the picture.

Keeping Americans Connected

One would not think that the head of the Federal Communications Commission would be a leader on important health-care issues, but Ajit has made a real difference here too. One of his major initiatives has been the development of telemedicine solutions to expand access to care in critical communities.

Beyond encouraging buildout of networks in less-connected areas, Pai’s FCC has also worked to allocate funding for health-care providers and educational institutions who were navigating the transition to remote services. He ensured that health-care providers’ telecommunications and information services were funded. He worked with the U.S. Department of Education to direct funds for education stabilization and allowed schools to purchase additional bandwidth. And he granted temporary additional spectrum usage to broadband providers to meet the increased demand upon our nation’s networks. Oh, and his Keep Americans Connected Pledge gathered commitments from more than 800 companies to ensure that Americans would not lose their connectivity due to pandemic-related circumstances. As if the list were not long enough, Congress’ January coronavirus relief package will ensure that these and other programs, like Rip and Replace, will remain funded for the foreseeable future.

I might sound like I am beating a dead horse here, but the seeds of this, too, were planted in his work in the minority. Here he is describing his work in a 2015 interview, as a minority commissioner:

My own father is a physician in rural Kansas, and I remember him heading out in his car to visit the small towns that lay 40 miles or more from home. When he was there, he could provide care for people who would otherwise never see a specialist at all. I sometimes wonder, back in the 1970s and 1980s, how much easier it would have been on patients, and him, if broadband had been available so he could provide healthcare online.

Agency Transparency and Democratization

Many minority commissioners like to harp on agency transparency. Some take a different view when they are in charge. But Ajit made good on his complaints about agency transparency when he became Chairman Pai. He did this by circulating draft items well in advance of monthly open meetings, giving people the opportunity to know what the agency was voting on.

You used to need a direct connection with the FCC to even be aware of what orders were being discussed—the worst of the D.C. swamp—but now anyone can read about the working items, in clear language.

These moves toward a more transparent, accessible FCC dispel the impression that the agency is run by Washington insiders who are disconnected from the average person. The meetings may well be dry and technical—they really are—but Chairman Pai’s statements are not only good-natured and humorous, but informative and substantive. The public has been well-served by his efforts here.

Incentivizing Innovation and Next-Generation Technologies

Chairman Pai will be remembered for his encouragement of innovation. Under his chairmanship, the FCC discontinued rules that unnecessarily required carriers to maintain costly older, lower-speed networks and legacy voice services. It streamlined the discontinuance process for lower-speed services if the carrier is already providing higher-speed service or if no customers are using the service. It also okayed streamlined notice following force majeure events like hurricanes to encourage investment and deployment of newer, faster infrastructure and services following destruction of networks. The FCC also approved requests by companies to provide high-speed broadband through non-geostationary orbit satellite constellations and created a streamlined licensing process for small satellites to encourage faster deployment.

This is what happens when you get a tech nerd at the head of an agency he loves and cares for. A serious commitment to good policy with an eye toward the future.

Restoring Internet Freedom

This is a pretty sensitive one for me. You hear less about it now, other than some murmurs from the Biden administration about changing it, but the debate over net neutrality got nasty and apocalyptic.

It was everywhere; people saying Chairman Pai would end the internet as we know it. The whole web blacked out for a day in protest. People mocked up memes showing a 25-cent-per-Google-search charge. And as a result of this over-the-top rhetoric, my friend, and his family, received death threats.

That is truly beyond the pale. One could not blame anyone for leaving public service in such an environment. I cannot begin to imagine what I would have done in Ajit’s place. But Ajit took the threats on his life with grace and dignity, never lost his sense of humor, and continued to serve the public dutifully with remarkable courage. I think that says a lot about him. And the American public is lucky to have benefited from his leadership.

Now, for the policy stuff. Though it should go without saying, the light-touch framework Chairman Pai returned us to—as opposed to the public utility one—will ensure that the United States maintains its leading position on technological innovation in 5G networks and services. The fact that we have endured COVID—and the massive strain on the internet it has caused—with little to no noticeable impact on internet services is all the evidence you need that he made the right choice. Ajit has rightfully earned the title of the “5G Chairman.”

Conclusion

I cannot give Ajit all the praise he truly deserves without sounding sycophantic, or bribed. There are any number of windows into his character, but one rises above the rest for me. And I wanted to take the extra time to thank Ajit for it.

Every year, without question, no matter what was going on—even as chairman—Ajit would come to my classes and talk to my students. At length. In detail. And about any subject they wished. He stayed until he answered all of their questions. Had I not politely shoved him out of the class to let him go do his real job, I’m sure he would have stayed until the last student left. And if you know anything about how to judge a person’s character, that will tell you all you need to know.

Congratulations, Chairman Pai.


[1] Jerry Ellig & Catherine Konieczny, The Organization of Economists in Regulatory Agencies: Does Structure Matter?

[2] Rural Digital Opportunity Fund, Fed. Commc’ns Comm’n, https://www.fcc.gov/auction/904.

[3] Press Release, Connect America Fund Auction to Expand Broadband to Over 700,000 Rural Homes and Businesses: Auction Allocates $1.488 Billion to Close the Digital Divide, Fed. Commc’ns Comm’n, https://docs.fcc.gov/public/attachments/DOC-353840A1.pdf.

[4] Press Release, FCC Provides Relief for Carriers Serving Tribal Lands, Fed. Commc’ns Comm’n, https://www.fcc.gov/document/fcc-provides-relief-carriers-serving-tribal-lands.

[5] Press Release, FCC Approves $950 Million to Harden, Improve, and Expand Broadband Networks in Puerto Rico and U.S. Virgin Islands, Fed. Commc’ns Comm’n, https://docs.fcc.gov/public/attachments/DOC-359891A1.pdf.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To exactly that effect, we published a letter, co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its review of the State of Antitrust Law, to reject calls for radical upheaval of antitrust law that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolization of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both of these statements challenge the principle that the rule of law depends on being politically neutral, including in antitrust. 

Our letter, contrary to the claims made by President Trump, Sen. Klobuchar, and some of those made to the Committee, asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy and was built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers and return us to a situation where per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency of the DoJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources being made available to protect workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not supported by all the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in ICLE’s own submission to the Committee, which goes into detail about the nature of competition in modern digital markets and in traditional markets that have been changed by the adoption of digital technologies.

The full letter is here.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Tim Brennan (Professor, Economics & Public Policy, University of Maryland; former FCC; former FTC).]

Thinking about how to think about the coronavirus situation, I keep coming back to three economic ideas that seem distinct but end up being related. First, a back-of-the-envelope calculation suggests that shutting down the economy for a while to reduce the spread of Covid-19 is a good bet. This leads to my second point: political viability, if not simple fairness, dictates that the winners compensate the losers. The extent of both of these leads to my main point, which is to understand why we can’t just “get the prices right” and let the market take care of it. Insisting that the market works in this situation could undercut the very strong arguments for why we should defer to markets in the vast majority of circumstances.

Is taking action worth it?

The first question is whether shutting down the economy to reduce the spread of Covid-19 is a good bet. Being an economist, I turn to benefit-cost analysis (BCA). All I can offer here is a back-of-the-envelope calculation, which may be an insult to envelopes. (This paper has a more serious calculation with qualitatively similar findings.) With all caveats recognized, the average US resident’s willingness to pay, WTP, for social distancing and closure policies is

        WTP = X% times Y% times VSL,

where X% is the fraction of the population that might be seriously affected, Y% is the reduction in the likelihood of death for this population from these policies, and VSL is the “value of statistical life” used in BCA calculations, in the ballpark of $9.5M.

For X%, take the percentage of the population over 65 (a demographic including me). This is around 16%. I’m not an epidemiologist, so for Y%, the reduced likelihood of death (either from reduced transmission or reduced hospital overload), I can only speculate. Say it’s 1%, which naively seems pretty small. Even with that, the average willingness to pay would be

        WTP = 16% times 1% times $9.5M = $15,200.

Multiplying that by a US population of roughly 330M gives a total national WTP of just over $5 trillion, or about 23% of GDP. Using conventional measures, this looks like a good trade in an aggregate benefit-cost sense, even leaving out willingness to pay to reduce the likelihood of feeling sick and the benefits to those younger than 65. Of course, among the caveats is not just whether to impose distancing and closures, but how long to have them (number of weeks), how severe they should be (gathering size limits, coverage of commercial establishments), and where they should be imposed (closing schools, colleges).
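For readers who want to check or rerun the arithmetic, here is a minimal Python sketch of the same calculation. The 16%, 1%, $9.5M, and 330M inputs come straight from the text; the GDP figure is my own assumption (roughly the pre-pandemic US level), since the post reports only the resulting ~23% share.

    # Back-of-the-envelope BCA sketch, following WTP = X% times Y% times VSL.
    X = 0.16        # X%: share of the population seriously at risk (age 65+)
    Y = 0.01        # Y%: assumed reduction in likelihood of death from the policies
    VSL = 9.5e6     # value of statistical life used in BCA, in dollars

    wtp_per_person = X * Y * VSL                  # $15,200
    population = 330e6                            # rough US population
    national_wtp = wtp_per_person * population    # about $5.0 trillion

    GDP = 21.4e12   # assumed US GDP; not stated in the post

    print(f"Per-person WTP: ${wtp_per_person:,.0f}")
    print(f"National WTP: ${national_wtp / 1e12:.2f} trillion")
    print(f"Share of GDP: {national_wtp / GDP:.0%}")

Tweaking X and Y is the quickest way to see how sensitive the “good bet” conclusion is to the epidemiological guesses.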

Actual, not just hypothetical, compensation

The justification for using BCA is that the winners could compensate the losers. In the coronavirus setting, the equity considerations are profound. Especially when I remember that GDP is not a measure of consumer surplus, I ask myself how many months of disruption (and not just lost wages) from unemployment low-income waiters, cab drivers, hotel cleaners, and the like should bear to reduce my over-65 likelihood of dying.

Consequently, an important component of this policy, both to respect equity and quite possibly to obtain public acceptance, is that the losers be compensated. In that respect, the justification for packages such as the proposal working (as I write) through Congress is not stimulus—after all, it’s harder to spend money these days—as much as compensating those who’ve lost jobs as a result of this policy. Stimulus can come when the economy is ready to be jump-started.

Markets don’t always work, perhaps like now 

This brings me to a final point—why is this a public policy matter? My answer to almost any policy question is the glib “just get the prices right and the market will take care of it.” That doesn’t seem all that popular now. Part of that is the politics of fairness: Should the wealthy get the ventilators? Should hoarding of hand sanitizer be rewarded? But much of it may be a useful reminder that markets do not work seamlessly and instantaneously, and may not be the best allocation mechanism in critical times.

That markets are not always best should be a familiar theme to TOTM readers. The cost of using markets is the centerpiece of Ronald Coase’s 1937 “Nature of the Firm” and of his 1960 “Problem of Social Cost” justification for allocation through the courts. Many of us, including me on TOTM, have invoked these arguments to argue against public interventions in the structure of firms, particularly antitrust actions regarding vertical integration. Another common theme is that the common law tends toward efficiency because of the market-like evolutionary processes in property, tort, and contract case law.

This perspective is a useful reminder that the benefits of markets should always be “compared to what?” In one familiar case, the benefits of markets are clear when compared to the snail’s pace, limited information, and political manipulability of administrative price setting. But when one is talking about national emergencies, with their inelastic demands, distributional consequences, and lack of time for the price mechanism to work its wonders, one can understand and justify the plethora of mandates currently imposed or contemplated.

The common law also appears not to be a good alternative. One can imagine the litigation nightmare if everyone who got the virus attempted to identify and sue some defendant for damages. A similar nightmare would await if courts were tasked with determining how the risk of a pandemic would have been allocated had contracts been ideal.

Much of this may be belaboring the obvious. My concern is that if those of us who appreciate the virtues of markets exaggerate their applicability, those skeptical of markets may use this episode to say that markets inherently fail and more of the economy should be publicly administered. Better to rely on facts rather than ideology, and to regard the current situation as the awful but justifiable exception that proves the general rule.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.


Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

b. Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

2. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.
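Stated schematically (my compressed restatement here, not notation from the book), the plan amounts to choosing, from the set of candidate responses, the approach that minimizes the sum of error costs and decision costs:

\[
x^{*} \;=\; \arg\min_{x \in X}\, \big[\, E(x) + D(x) \,\big]
\]

where \(X\) runs from doing nothing, through facilitating a privately ordered solution, to various mandates; \(E(x)\) denotes the expected error costs of approach \(x\) (losses from mistaken intervention or mistaken forbearance); and \(D(x)\) denotes its decision costs (the costs of gathering information, administering the rule, and complying with it).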

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different from the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million), rules from independent agencies, or rules made by Congress, the courts, or state and local governments.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard-core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

The FTC will hold an “Informational Injury Workshop” in December “to examine consumer injury in the context of privacy and data security.” Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The problem, as we discuss in comments submitted by the International Center for Law & Economics to the FTC for the workshop, is that the Commission’s current approach troublingly mixes the required separate analyses of risk and harm, with little elucidation of either.

The core of the problem arises from the Commission’s reliance on what it calls a “reasonableness” standard for its evaluation of data security. By its nature, a standard that assigns liability only for unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, and evaluating the costs and benefits of conduct that would mitigate the risk of harm. Unfortunately, the Commission’s approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.
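For illustration only: the cost-benefit balancing that a negligence analysis entails is often summarized by Judge Learned Hand’s formula, under which a defendant is negligent when it forgoes a precaution whose burden is less than the expected harm the precaution would have prevented:

\[
B \;<\; P \times L
\]

where \(B\) is the burden of the untaken precaution, \(P\) the probability of the harm, and \(L\) the magnitude of the loss. The Hand formula is our shorthand here, not a test the Commission purports to apply; the point is simply that a genuine “reasonableness” standard requires some structured weighing of this sort rather than liability assigned after the fact at the agency’s discretion.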

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and offer our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury given evolving social context, and on the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

ICLE’s full comments are available here.

The comments draw upon our article, When ‘Reasonable’ Isn’t: The FTC’s Standard-Less Data Security Standard, forthcoming in the Journal of Law, Economics and Policy.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased by nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.
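As a rough check, using the rounded figures just quoted (the study’s unrounded inputs presumably account for the small difference in the first number):

\[
\frac{\$4.4\text{B} - \$0.507\text{B}}{\$0.507\text{B}} \approx 7.7 \quad (\approx 770\%),
\qquad
\frac{65\% - 10\%}{10\%} = 5.5 \quad (= 550\%).
\]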

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these newly combined firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.
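To make the contrast concrete, here is a minimal sketch of the two sensitivity lists as described above. The category labels are my own hypothetical shorthand (neither agency publishes anything like code), and the sketch deliberately ignores the FTC’s context analysis discussed in the next section, which is where the frameworks diverge even further:

```python
# Hypothetical sketch of the two consent regimes described above.
# Category labels are illustrative shorthand, not official agency terms.

FTC_SENSITIVE = {
    "children", "financial", "health",
    "social_security_number", "precise_geolocation",
}

# The Fact Sheet would treat these additional categories as sensitive
# in all cases:
FCC_PROPOSED_SENSITIVE = FTC_SENSITIVE | {
    "web_browsing_history", "app_usage_history",
    "content_of_communications",
}

def required_consent(category, sensitive_categories):
    """Return the consent level a sensitivity-only rule would require."""
    if category in sensitive_categories:
        return "opt-in"
    return "opt-out or other protections"

# Browsing history is not inherently sensitive under the FTC framework...
print(required_consent("web_browsing_history", FTC_SENSITIVE))
# ...but would always require opt-in consent under the Fact Sheet.
print(required_consent("web_browsing_history", FCC_PROPOSED_SENSITIVE))
```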

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for acknowledging that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to assess the magnitude of the costs and benefits of that trade-off, or the deep complexities it involves, and it puts an unjustified thumb on the scale in favor of limiting data use.

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Last week the International Center for Law & Economics and I filed an amicus brief in the DC Circuit in support of en banc review of the court’s decision to uphold the FCC’s 2015 Open Internet Order.

In our previous amicus brief before the panel that initially reviewed the OIO, we argued, among other things, that

In order to justify its Order, the Commission makes questionable use of important facts. For instance, the Order’s ban on paid prioritization ignores and mischaracterizes relevant record evidence and relies on irrelevant evidence. The Order also omits any substantial consideration of costs. The apparent necessity of the Commission’s aggressive treatment of the Order’s factual basis demonstrates the lengths to which the Commission must go in its attempt to fit the Order within its statutory authority.

Our brief supporting en banc review builds on these points to argue that

By reflexively affording substantial deference to the FCC in affirming the Open Internet Order (“OIO”), the panel majority’s opinion is in tension with recent Supreme Court precedent….

The panel majority need not have, and arguably should not have, afforded the FCC the level of deference that it did. The Supreme Court’s decisions in State Farm, Fox, and Encino all require a more thorough vetting of the reasons underlying an agency change in policy than is otherwise required under the familiar Chevron framework. Similarly, Brown & Williamson, Utility Air Regulatory Group, and King all indicate circumstances in which an agency construction of an otherwise ambiguous statute is not due deference, including when the agency interpretation is a departure from longstanding agency understandings of a statute or when the agency is not acting in an expert capacity (e.g., its decision is based on changing policy preferences, not changing factual or technical considerations).

In effect, the panel majority based its decision whether to afford the FCC deference upon deference to the agency’s poorly supported assertions that it was due deference. We argue that this is wholly inappropriate in light of recent Supreme Court cases.

Moreover,

The panel majority failed to appreciate the importance of granting Chevron deference to the FCC. That importance is most clearly seen at an aggregate level. In a large-scale study of every Court of Appeals decision between 2003 and 2013, Professors Kent Barnett and Christopher Walker found that a court’s decision to defer to agency action is uniquely determinative in cases where, as here, an agency is changing established policy.

Kent Barnett & Christopher J. Walker, Chevron in the Circuit Courts 61, Figure 14 (2016), available at ssrn.com/abstract=2808848.

Figure 14 from Barnett & Walker, as reproduced in our brief.

As that study demonstrates,

agency decisions to change established policy tend to present serious, systematic defects — and [thus that] it is incumbent upon this court to review the panel majority’s decision to reflexively grant Chevron deference. Further, the data underscore the importance of the Supreme Court’s command in Fox and Encino that agencies show good reason for a change in policy; its recognition in Brown & Williamson and UARG that departures from existing policy may fall outside of the Chevron regime; and its command in King that policies not made by agencies acting in their capacity as technical experts may fall outside of the Chevron regime. In such cases, the Court essentially holds that reflexive application of Chevron deference may not be appropriate because these circumstances may tend toward agency action that is arbitrary, capricious, in excess of statutory authority, or otherwise not in accordance with law.

As we conclude:

The present case is a clear example where greater scrutiny of an agency’s decision-making process is both warranted and necessary. The panel majority all too readily afforded the FCC great deference, despite the clear and unaddressed evidence of serious flaws in the agency’s decision-making process. As we argued in our brief before the panel, and as Judge Williams recognized in his partial dissent, the OIO was based on factually inaccurate, contradicted, and irrelevant record evidence.

Read our full — and very short — amicus brief here.