
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Geoffrey A. Manne is the president and founder of the International Center for Law and Economics.]

I’m delighted to add my comments to the chorus of voices honoring Ajit Pai’s remarkable tenure at the Federal Communications Commission. I’ve known Ajit longer than most. We were classmates in law school … let’s just say “many” years ago. Among the other symposium contributors I know of only one—fellow classmate, Tom Nachbar—who can make a similar claim. I wish I could say this gives me special insight into his motivations, his actions, and the significance of his accomplishments, but really it means only that I have endured his dad jokes and interminable pop-culture references longer than most. 

But I can say this: Ajit has always stood out as a genuinely humble, unfailingly gregarious, relentlessly curious, and remarkably intelligent human being, and he deployed these characteristics to great success at the FCC.   

Ajit’s tenure at the FCC was marked by an abiding appreciation for the importance of competition, both as a guiding principle for new regulations and as a touchstone to determine when to challenge existing ones. As others have noted (and as we have written elsewhere), that approach was reflected significantly in the commission’s Restoring Internet Freedom Order, which made competition—and competition enforcement by the antitrust agencies—the centerpiece of the agency’s approach to net neutrality. But I would argue that perhaps Chairman Pai’s greatest contribution to bringing competition to the forefront of the FCC’s mandate came in his work on media modernization.

Fairly early in his tenure at the commission, Ajit raised concerns with the FCC’s failure to modernize its media-ownership rules. In response to the FCC’s belated effort to initiate the required 2010 and 2014 Quadrennial Reviews of those rules, then-Commissioner Pai noted that the commission had abdicated its responsibility under the statute to promote competition. Not only was the FCC proposing to maintain a host of outdated existing rules, but it was also moving to impose further constraints (through new limitations on the use of Joint Sales Agreements (JSAs)). As Ajit noted, such an approach was antithetical to competition:

In smaller markets, the choice is not between two stations entering into a JSA and those same two stations flourishing while operating completely independently. Rather, the choice is between two stations entering into a JSA and at least one of those stations’ viability being threatened. If stations in these smaller markets are to survive and provide many of the same services as television stations in larger markets, they must cut costs. And JSAs are a vital mechanism for doing that.

The efficiencies created by JSAs are not a luxury in today’s digital age. They are necessary, as local broadcasters face fierce competition for viewers and advertisers.

Under then-Chairman Tom Wheeler, the commission voted to adopt the Quadrennial Review in 2016, issuing rules that largely maintained the status quo and, at best, paid tepid lip service to the massive changes in the competitive landscape. As Ajit wrote in dissent:

The changes to the media marketplace since the FCC adopted the Newspaper-Broadcast Cross-Ownership Rule in 1975 have been revolutionary…. Yet, instead of repealing the Newspaper-Broadcast Cross-Ownership Rule to account for the massive changes in how Americans receive news and information, we cling to it.

And over the near-decade since the FCC last finished a “quadrennial” review, the video marketplace has transformed dramatically…. Yet, instead of loosening the Local Television Ownership Rule to account for the increasing competition to broadcast television stations, we actually tighten that regulation.

And instead of updating the Local Radio Ownership Rule, the Radio-Television Cross-Ownership Rule, and the Dual Network Rule, we merely rubber-stamp them.

The more the media marketplace changes, the more the FCC’s media regulations stay the same.

As Ajit also accurately noted at the time:

Soon, I expect outside parties to deliver us to the denouement: a decisive round of judicial review. I hope that the court that reviews this sad and total abdication of the administrative function finds, once and for all, that our media ownership rules can no longer stay stuck in the 1970s consistent with the Administrative Procedure Act, the Communications Act, and common sense. The regulations discussed above are as timely as “rabbit ears,” and it’s about time they go the way of those relics of the broadcast world. I am hopeful that the intervention of the judicial branch will bring us into the digital age.

And, indeed, just this week the case was argued before the Supreme Court.

In the interim, however, Ajit became Chairman of the FCC. And in his first year in that capacity, he took up a reconsideration of the 2016 Order. This 2017 Order on Reconsideration is the one that finally came before the Supreme Court. 

Consistent with his unwavering commitment to promote media competition—and no longer a minority commissioner shouting into the wind—Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers:

Today we end the 2010/2014 Quadrennial Review proceeding. In doing so, the Commission not only acknowledges the dynamic nature of the media marketplace, but takes concrete steps to update its broadcast ownership rules to reflect reality…. In this Order on Reconsideration, we refuse to ignore the changed landscape and the mandates of Section 202(h), and we deliver on the Commission’s promise to adopt broadcast ownership rules that reflect the present, not the past. Because of our actions today to relax and eliminate outdated rules, broadcasters and local newspapers will at last be given a greater opportunity to compete and thrive in the vibrant and fast-changing media marketplace. And in the end, it is consumers that will benefit, as broadcast stations and newspapers—those media outlets most committed to serving their local communities—will be better able to invest in local news and public interest programming and improve their overall service to those communities.

Ajit’s approach was certainly deregulatory. But more importantly, it was realistic, well-reasoned, and responsive to changing economic circumstances. Unlike most of his predecessors, Ajit was unwilling to accede to the torpor of repeated judicial remands (issued on dubious legal grounds, as we noted in our amicus brief urging the Court to grant certiorari in the case) that permitted manifestly outdated rules to persist in the face of massive and obvious economic change.

Like Ajit, I am not one to advocate regulatory action lightly, especially in the (all-too-rare) face of judicial review that suggests an agency has exceeded its discretion. But in this case, the need for dramatic rule change—here, to deregulate—was undeniable. The only abuse of discretion was on the part of the court, not the agency. As we put it in our amicus brief:

[T]he panel vacated these vital reforms based on mere speculation that they would hinder minority and female ownership, rather than grounding its action on any record evidence of such an effect. In fact, the 2017 Reconsideration Order makes clear that the FCC found no evidence in the record supporting the court’s speculative concern.

…In rejecting the FCC’s stated reasons for repealing or modifying the rules, absent any evidence in the record to the contrary, the panel substituted its own speculative concerns for the judgment of the FCC, notwithstanding the FCC’s decades of experience regulating the broadcast and newspaper industries. By so doing, the panel exceeded the bounds of its judicial review powers under the APA.

Key to Ajit’s conclusion that competition in local media markets could be furthered by permitting more concentration was his awareness that the relevant market for analysis couldn’t be limited to traditional media outlets like broadcasters and newspapers; it must include the likes of cable networks, streaming video providers, and social-media platforms, as well. As Ajit put it in a recent speech:

The problem is a fundamental refusal to grapple with today’s marketplace: what the service market is, who the competitors are, and the like. When assessing competition, some in Washington are so obsessed with the numerator, so to speak—the size of a particular company, for instance—that they’ve completely ignored the explosion of the denominator—the full range of alternatives in media today, many of which didn’t exist a few years ago.

When determining a particular company’s market share, a candid assessment of the denominator should include far more than just broadcast networks or cable channels. From any perspective (economic, legal, or policy), it should include any kinds of media consumption that consumers consider to be substitutes. That could be TV. It could be radio. It could be cable. It could be streaming. It could be social media. It could be gaming. It could be still something else. The touchstone of that denominator should be “what content do people choose today?”, not “what content did people choose in 1975 or 1992, and how can we artificially constrict our inquiry today to match that?”

For some reason, this simple and seemingly undeniable conception of the market escapes virtually all critics of Ajit’s media-modernization agenda. Indeed, even Justice Stephen Breyer in this week’s oral argument seemed baffled by the notion that more concentration could entail more competition:

JUSTICE BREYER: I’m thinking of it solely as a — the anti-merger part, in — in anti-merger law, merger law generally, I think, has a theory, and the theory is, beyond a certain point and other things being equal, you have fewer companies in a market, the harder it is to enter, and it’s particularly harder for smaller firms. And, here, smaller firms are heavily correlated or more likely to be correlated with women and minorities. All right?

The opposite view, which is what the FCC has now chosen, is — is they want to move or allow to be moved towards more concentration. So what’s the theory that that wouldn’t hurt the minorities and women or smaller businesses? What’s the theory the opposite way, in other words? I’m not asking for data. I’m asking for a theory.

Of course, as Justice Breyer should surely know—and as I know Ajit Pai knows—counting the number of firms in a market is a horrible way to determine its competitiveness. In this case, the competition from internet media platforms, particularly for advertising dollars, is immense. A regulatory regime that prohibits traditional local-media outlets from forging efficient joint ventures or from obtaining the scale necessary to compete with those platforms does not further competition. Even if such a rule might temporarily result in more media outlets, eventually it would result in no media outlets, other than the large online platforms. The basic theory behind the Reconsideration Order—to answer Justice Breyer—is that outdated government regulation imposes artificial constraints on the ability of local media to adopt the organizational structures necessary to compete. Removing those constraints may not prove a magic bullet that saves local broadcasters and newspapers, but allowing the rules to remain absolutely ensures their demise. 
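Pai’s numerator/denominator point can be made concrete with a toy calculation (all revenue figures below are invented for illustration and are not drawn from any FCC record): the same station group looks dominant when the “market” is defined as local broadcast alone, and marginal once cable, streaming, and social-media competitors for the same advertising dollars are counted in the denominator.

```python
# Hypothetical illustration: a firm's measured market share and the
# market's concentration both depend on how the denominator (the
# relevant market) is defined. All figures are invented.

def market_share(firm_revenue, market_revenues):
    """Firm's share of total revenue in the defined market."""
    return firm_revenue / sum(market_revenues)

def hhi(market_revenues):
    """Herfindahl-Hirschman Index: sum of squared shares (in percent)."""
    total = sum(market_revenues)
    return sum((r / total * 100) ** 2 for r in market_revenues)

# Narrow market: only local broadcast stations (ad revenue, $M).
broadcast = [40, 30, 20, 10]

# Broad market: the same stations plus hypothetical cable, streaming,
# and social-media platforms competing for the same ad dollars.
broad = broadcast + [150, 140, 130, 120, 110]

print(market_share(40, broadcast))   # 0.40 -- looks dominant
print(round(market_share(40, broad), 2))  # 0.05 -- marginal
print(round(hhi(broadcast)))         # 3000 -- "highly concentrated"
print(round(hhi(broad)))             # 1573 -- moderately concentrated
```

On these invented numbers, the leading broadcaster’s share falls from 40% to about 5% once internet-era alternatives enter the denominator, which is the arithmetic behind the argument that permitting two stations to combine need not reduce competition in the market consumers actually face.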

Ajit’s commitment to furthering competition in telecommunications markets remained steadfast throughout his tenure at the FCC. From opposing restrictive revisions to the agency’s spectrum screen to dissenting from the effort to impose a poorly conceived and retrograde regulatory regime on set-top boxes, to challenging the agency’s abuse of its merger review authority to impose ultra vires regulations, to, of course, rolling back his predecessor’s unsupportable Title II approach to net neutrality—and on virtually every issue in between—Ajit sought at every turn to create a regulatory backdrop conducive to competition.

Tom Wheeler, Pai’s predecessor at the FCC, claimed that his personal mantra was “competition, competition, competition.” His greatest legacy, in that regard, was in turning over the agency to Ajit.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Jerry Ellig was a research professor at The George Washington University Regulatory Studies Center and served as chief economist at the Federal Communications Commission from 2017 to 2018. Tragically, he passed away Jan. 20, 2021. TOTM is honored to publish his contribution to this symposium.]

One significant aspect of Chairman Ajit Pai’s legacy is not a policy change, but an organizational one: establishment of the Federal Communications Commission’s (FCC’s) Office of Economics and Analytics (OEA) in 2018.

Prior to OEA, most of the FCC’s economists were assigned to the various policy bureaus, such as Wireless, Wireline Competition, Public Safety, Media, and International. Each of these bureaus had its own chief economist, but the rank-and-file economists reported to the managers who ran the bureaus – usually attorneys who also developed policy and wrote regulations. In the words of former FCC Chief Economist Thomas Hazlett, the FCC had “no location anywhere in the organizational structure devoted primarily to economic analysis.”

Establishment of OEA involved four significant changes. First, most of the FCC’s economists (along with data strategists and auction specialists) are now grouped together into an organization separate from the policy bureaus, and they are managed by other economists. Second, the FCC rules establishing the new office tasked OEA with reviewing every rulemaking, reviewing every other item with economic content that comes before the commission for a vote, and preparing a full benefit-cost analysis for any regulation with $100 million or more in annual economic impact. Third, a joint memo from the FCC’s Office of General Counsel and OEA specifies that economists are to be involved in the early stages of all rulemakings. Fourth, the memo also indicates that FCC regulatory analysis should follow the principles articulated in Executive Order 12866 and Office of Management and Budget Circular A-4 (while specifying that the FCC, as an independent agency, is not bound by the executive order).

While this structure for managing economists was new for the FCC, it is hardly uncommon in federal regulatory agencies. Numerous independent agencies that deal with economic regulation house their economists in a separate bureau or office, including the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Surface Transportation Board, the Office of the Comptroller of the Currency, and the Federal Trade Commission. The SEC displays even more parallels with the FCC. A guidance memo adopted in 2012 by the SEC’s Office of General Counsel and Division of Risk, Strategy and Financial Innovation (the name of the division where economists and other analysts were located) specifies that economists are to be involved in the early stages of all rulemakings and articulates best analytical practices based on Executive Order 12866 and Circular A-4.

A separate economics office offers several advantages over the FCC’s prior approach. It gives the economists greater freedom to offer frank advice, enables them to conduct higher-quality analysis more consistent with the norms of their profession, and may ultimately make it easier to uphold FCC rules that are challenged in court.

Independence.  When I served as chief economist at the FCC in 2017-2018, I gathered from conversations that the most common practice in the past was for attorneys who wrote rules to turn to economists for supporting analysis after key decisions had already been made. This was not always the process, but it often occurred. The internal working group of senior FCC career staff who drafted the plan for OEA reached similar conclusions. After the establishment of OEA, an FCC economist I interviewed noted how his role had changed: “My job used to be to support the policy decisions made in the chairman’s office. Now I’m much freer to speak my own mind.”

Ensuring economists’ independence is not a problem unique to the FCC. In a 2017 study, Stuart Shapiro found that most of the high-level economists he interviewed who worked on regulatory impact analyses in federal agencies perceive that economists can be more objective if they are located outside the program office that develops the regulations they are analyzing. As one put it, “It’s very difficult to conduct a BCA [benefit-cost analysis] if our boss wrote what you are analyzing.” Interviews with senior economists and non-economists who work on regulation that I conducted for an Administrative Conference of the United States project in 2019 revealed similar conclusions across federal agencies. Economists located in organizations separate from the program office said that structure gave them greater independence and ability to develop better analytical methodologies. On the other hand, economists located in program offices said they experienced or knew of instances where they were pressured or told to produce an analysis with the results decision-makers wanted.

The FTC provides an informative case study. From 1955 to 1961, many of the FTC’s economists reported to the attorneys who conducted antitrust cases; in 1961, they were moved into a separate Bureau of Economics. Fritz Mueller, the FTC chief economist responsible for moving the antitrust economists back into the Bureau of Economics, noted that they were originally placed under the antitrust attorneys because the attorneys wanted more control over the economic analysis. A 2015 evaluation by the FTC’s Inspector General concluded that the Bureau of Economics’ existence as a separate organization improves its ability to offer “unbiased and sound economic analysis to support decision-making.”

Higher-quality analysis. An issue closely related to economists’ independence is the quality of the economic analysis. Executive branch regulatory economists interviewed by Richard Williams expressed concern that the economic analysis was more likely to be changed to support decisions when the economists are located in the program office that writes the regulations. More generally, a study that Catherine Konieczny and I conducted while we were at the FCC found that executive branch agencies are more likely to produce higher-quality regulatory impact analyses if the economists responsible for the analysis are in an independent economics office rather than the program office.

Upholding regulations in court. In Michigan v. EPA, the Supreme Court held that it is unreasonable for agencies to refuse to consider regulatory costs if the authorizing statute does not prohibit them from doing so. This precedent will likely increase judicial expectations that agencies will consider economic issues when they issue regulations. The FCC’s OGC-OEA memo cites examples of cases where the quality of the FCC’s economic analysis either helped or harmed the commission’s ability to survive legal challenge under the Administrative Procedure Act’s “arbitrary and capricious” standard. More systematically, a recent Regulatory Studies Center working paper finds that a higher-quality economic analysis accompanying a regulation reduces the likelihood that courts will strike down the regulation, provided that the agency explains how it used the analysis in decisions.

Two potential disadvantages of a separate economics office are that it may make the economists easier to ignore (what former FCC Chief Economist Tim Brennan calls the “Siberia effect”) and may lead the economists to produce research that is less relevant to the practical policy concerns of the policymaking bureaus. The FCC’s reorganization plan took these disadvantages seriously.

To ensure that the ultimate decision-makers—the commissioners—have access to the economists’ analysis and recommendations, the rules establishing the office give OEA explicit responsibility for reviewing all items with economic content that come before the commission. Each item is accompanied by a cover memo that indicates whether OEA believes there are any significant issues, and whether they have been dealt with adequately. To ensure that economists and policy bureaus work together from the outset of regulatory initiatives, the OGC-OEA memo instructs:

Bureaus and Offices should, to the extent practicable, coordinate with OEA in the early stages of all Commission-level and major Bureau-level proceedings that are likely to draw scrutiny due to their economic impact. Such coordination will help promote productive communication and avoid delays from the need to incorporate additional analysis or other content late in the drafting process. In the earliest stages of the rulemaking process, economists and related staff will work with programmatic staff to help frame key questions, which may include drafting options memos with the lead Bureau or Office.

While presiding over his final commission meeting on Jan. 13, Pai commented, “It’s second nature now for all of us to ask, ‘What do the economists think?’” The real test of this institutional innovation will be whether that practice continues under a new chair in the next administration.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Daniel Lyons is a professor of law at Boston College Law School and a visiting fellow at the American Enterprise Institute.]

For many, the chairmanship of Ajit Pai is notable for its many headline-grabbing substantive achievements, including the Restoring Internet Freedom order, 5G deployment, and rural buildout—many of which have been or will be discussed in this symposium. But that conversation is incomplete without also acknowledging Pai’s careful attention to the basic blocking and tackling of running a telecom agency. The last four years at the Federal Communications Commission were marked by small but significant improvements in how the commission functions, and few are more important than the chairman’s commitment to transparency.

Draft Orders: The Dark Ages Before 2017

This commitment is most notable in Pai’s revisions to the open meeting process. From time immemorial, the FCC chairman would set the agenda for the agency’s monthly meeting by circulating draft orders to the other commissioners three weeks in advance. But the public was deliberately excluded from that distribution list. During this period, the commissioners would read proposals, negotiate revisions behind the scenes, then meet publicly to vote on final agency action. But only after the meeting—often several days later—would the actual text of the order be made public.

The opacity of this process had several adverse consequences. Most obviously, the public lacked details about the substance of the commission’s deliberations. The Government in the Sunshine Act requires the agency’s meetings to be made public so the American people know what their government is doing. But without the text of the orders under consideration, the public had only a superficial understanding of what was happening each month. The process was reminiscent of House Speaker Nancy Pelosi’s famous gaffe that Congress needed to “pass the [Affordable Care Act] bill so that you can find out what’s in it.” During the high-profile deliberations over the Open Internet Order in 2015, then-Commissioner Pai made significant hay over this secrecy, repeatedly posting pictures of himself with the 300-plus-page order on Twitter with captions such as “I wish the public could see what’s inside” and “the public still can’t see it.”

Other consequences were less apparent, but more detrimental. Because the public lacked detail about key initiatives, the telecom media cycle could be manipulated by strategic leaks designed to shape the final vote. As then-Commissioner Pai testified to Congress in 2016:

[T]he public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Sometimes, this secrecy backfired on the chairman, such as when net-neutrality advocates used media pressure to shape the 2014 Open Internet NPRM. Then-Chairman Tom Wheeler’s proposed order sought to follow the roadmap laid out by the D.C. Circuit’s Verizon decision, which relied on Title I to prevent ISPs from blocking content or acting in a “commercially unreasonable manner.” Proponents of a more aggressive Title II approach leaked these details to the media in a negative light, prompting tech journalists and advocates to unleash a wave of criticism alleging the chairman was “killing off net neutrality to…let the big broadband providers double charge.” In full damage control mode, Wheeler attempted to “set the record straight” about “a great deal of misinformation that has recently surfaced regarding” the draft order. But the tempest created by these leaks continued, pressuring Wheeler into adding a Title II option to the NPRM—which, of course, became the basis of the 2015 final rule.

This secrecy also harmed agency bipartisanship, as minority commissioners sometimes felt as much in the dark as the general public. As Wheeler scrambled to address Title II advocates’ concerns, he reportedly shared revised drafts with fellow Democrats but did not circulate the final draft to Republicans until less than 48 hours before the vote—leading Pai to remark cheekily that “when it comes to the Chairman’s latest net neutrality proposal, the Democratic Commissioners are in the fast lane and the Republican Commissioners apparently are being throttled.” Similarly, Pai complained during the 2014 spectrum screen proceeding that “I was not provided a final version of the item until 11:50 p.m. the night before the vote and it was a substantially different document with substantively revised reasoning than the one that was previously circulated.”

Letting the Sunshine In

Eliminating this culture of secrecy was one of Pai’s first decisions as chairman. Less than a month after assuming the reins at the agency, he announced that the FCC would publish all draft items at the same time they are circulated to commissioners, typically three weeks before each monthly meeting. While this move was largely applauded, some were concerned that this transparency would hamper the agency’s operations. One critic suggested that pre-meeting publication would complicate negotiations among commissioners: “Usually, drafts created negotiating room…Now the chairman’s negotiating position looks like a final position, which undercuts negotiating ability.” Another, while supportive of the change, was concerned that the need to put a draft order in final form well before a meeting might add “a month or more to the FCC’s rulemaking adoption process.”

Fortunately, these concerns proved to be unfounded. The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan—compared to 33% and 69.9%, respectively, under Chairman Wheeler. 

This increased transparency also improved the overall quality of the agency’s work product. In a 2018 speech before the Free State Foundation, Commissioner Mike O’Rielly explained that “drafts are now more complete and more polished prior to the public reveal, so edits prior to the meeting are coming from Commissioners, as opposed to there being last minute changes—or rewrites—from staff or the Office of General Counsel.” Publishing draft orders in advance allows the public to flag potential issues for revision before the meeting, which improves the quality of the final draft and reduces the risk of successful post-meeting challenges via motions for reconsideration or petitions for judicial review. O’Rielly went on to note that the agency seemed to be running more efficiently as well, as “[m]eetings are targeted to specific issues, unnecessary discussions of non-existent issues have been eliminated, [and] conversations are more productive.”

Other Reforms

While pre-meeting publication was the most visible improvement to agency transparency, there are other initiatives also worth mentioning.

  • Limiting Editorial Privileges: Chairman Pai dramatically limited “editorial privileges,” a longtime tradition that allowed agency staff to make changes to an order’s text even after the final vote. Under Pai, editorial privileges were limited to technical and conforming edits only; substantive changes were not permitted unless they were proposed directly by a commissioner and only in response to new arguments offered by a dissenting commissioner. This reduces the likelihood of a significant change being introduced outside the public eye.
  • Fact Sheet: Adopting a suggestion of Commissioner Mignon Clyburn, Pai made it a practice to preface each published draft order with a one-page fact sheet that summarized the item in lay terms to the extent possible. This made the agency’s monthly work more accessible and transparent to members of the public who lacked the time to wade through the full text of each draft order.
  • Online Transparency Dashboard: Pai also launched an online dashboard on the agency’s website. This dashboard offers metrics on the number of items currently pending at the commission by category, as well as quarterly trends over time.
  • Restricting Comment on Upcoming Items: As a gesture of respect to fellow commissioners, Pai committed that the chairman’s office would not brief the press or members of the public, or publish a blog, about an upcoming matter before it was shared with other commissioners. This was another step toward reducing the strategic use of leaks or selective access to guide the tech media news cycle.

And while it’s technically not a transparency reform, Pai also deserves credit for his willingness to engage the public as the face of the agency. He was the first FCC commissioner to join Twitter, and throughout his chairmanship he maintained an active social media presence that helped personalize the agency and make it more accessible. His commitment to this channel is all the more impressive when one considers the way some opponents used these platforms to hurl a steady stream of hateful, often violent and racist invective at him during his tenure.

Pai deserves tremendous credit for spearheading these efforts to bring the agency out of the shadows and into the sunlight. Of course, he was not working alone. Pai shares credit with other commissioners and staff who supported transparency and worked to bring these policies to fruition, most notably former Commissioner O’Rielly, who beat a steady drum for process reform throughout his tenure.

We do not yet know who President Joe Biden will appoint as Pai’s successor. It is fair to assume that whoever is chosen will seek to put his or her own stamp on the agency. But let’s hope that enhanced transparency and the other process reforms enacted over the past four years remain a staple of agency practice moving forward. They may not be flashy, but they may prove to be the most significant and long-lasting impact of the Pai chairmanship.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Brent Skorup is a senior research fellow at the Mercatus Center at George Mason University.]

Ajit Pai came into the Federal Communications Commission chairmanship with a single priority: to improve the coverage, cost, and competitiveness of U.S. broadband for the benefit of consumers. The 5G Fast Plan, the formation of the Broadband Deployment Advisory Committee, the large spectrum auctions, and other broadband infrastructure initiatives over the past four years have resulted in accelerated buildouts and higher-quality services. Millions more Americans have gotten connected because of agency action and industry investment.

That brings us to Chairman Pai’s most important action: restoring the deregulatory stance of the FCC toward broadband services and repealing the Title II “net neutrality” rules in 2018. Had he not done this, his and future FCCs would have been bogged down in inscrutable, never-ending net-neutrality debates, reminiscent of the Fairness Doctrine disputes that consumed the agency 50 years ago. The repeal cleared the decks for the pro-deployment policies that followed and redirected the agency away from its roots in mass-media policy toward a future in which its primary responsibilities are encouraging broadband deployment and adoption.

It took tremendous courage from Chairman Pai and Commissioners Michael O’Rielly and Brendan Carr to vote to repeal the 2015 Title II regulations, though they probably weren’t prepared for the public reaction to a seemingly arcane dispute over regulatory classification. The hysteria ginned up by net-neutrality advocates, members of Congress, celebrities, and too-credulous journalists was unlike anything I’ve seen in political advocacy. Advocates, of course, don’t intend to provoke disturbed individuals, but the irresponsible predictions of “the end of the internet as we know it” and widespread internet service provider (ISP) content blocking drove one man to call in a bomb threat to the FCC, clearing the building in a desperate attempt to delay or derail the FCC’s Title II repeal. At least two other men pleaded guilty to federal charges after issuing vicious death threats to Chairman Pai, a New York congressman, and their families in the run-up to the regulation’s repeal. No public official should have to face anything resembling that over a policy dispute.

For all the furor, net-neutrality advocates promised a neutral internet that never was and never will be. “Happy little bunny rabbit dreams” is how David Clark of MIT, an early chief protocol architect of the internet, derided the idea of treating all online traffic the same. Relatedly, the no-blocking rule—the sine qua non of net neutrality—was always a legally dubious requirement. Legal scholars had for years called into doubt the constitutionality of imposing must-carry requirements on ISPs. Unsurprisingly, a federal appellate judge pressed this point in oral arguments over the net-neutrality rules in 2016. The Obama FCC attorney conceded without a fight; even after the net neutrality order, ISPs were “absolutely” free to curate the internet.

Chairman Pai recognized that the fight wasn’t about website blocking and it wasn’t, strictly speaking, about net neutrality. This was the latest front in the long battle over whether the FCC should strictly regulate mass-media distribution. There is a long tradition of progressive distrust of new (unregulated) media. The media access movement that pushed for broadcast TV and radio and cable regulations from the 1960s to 1980s never went away, but the terminology has changed: disinformation, net neutrality, hate speech, gatekeeper.

The decline in power of regulated media—broadcast radio and TV—and the rising power of unregulated internet-based media—social media, Netflix, and podcasts—meant that the FCC and Congress had few ways to shape American news and media consumption. In the words of Tim Wu, the law professor who coined the term “net neutrality,” the internet rules are about giving the agency the continuing ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”

Title II was the only tool available to bring this powerful new media—broadband access—under intense regulatory scrutiny by regulators and the political class. As net-neutrality advocate and Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition. Net neutrality would draw the agency into contentious mass-media regulation once again, distracting it from universal service efforts, spectrum access and auctions, and cleaning up the regulatory detritus that had slowly accumulated since the passage of the agency’s guiding statutes: the 1934 Communications Act and the 1996 Telecommunications Act.

There are probably items Chairman Pai wishes he’d finished or had done slightly differently. He leaves a proud legacy, however, and his politically risky decision to repeal the Title II rules redirected agency energies away from no-win net-neutrality battles and toward broadband deployment and infrastructure. Great progress was made, and one hopes the Biden FCC chairperson will continue the trajectory Pai set.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]

Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls.

Less will be said about the important work he has done rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations make up their legacies that the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted to do: both build the organization and make significant regulatory reforms.

Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.

Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.

Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai, over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard, as Democrats in general were deeply against President Donald Trump and some members of Congress found a divided FCC convenient.

The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.

On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.

Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analyses. To remedy this situation, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, but unfortunately it was with one of the rare party-line votes. Hopefully, Democratic commissioners have learned the value of the OEA.

The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pretend that his or her initiatives are somehow substantive.

Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:

On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of millions of dollars more in taxes than it is spending on the program. [footnotes omitted]

The lack of transparency led many to not trust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had not Chairman Pai changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on. And it ensures that orders do not change after they are voted on.

The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech to restore light-handed regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgment for the FCC’s research on the impacts of Ligado’s proposal.

It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Julian Morris, (Director of Innovation Policy, ICLE).]

SARS-CoV2, the virus that causes COVID-19, is now widespread in the population in many countries, including the US, UK, Australia, Iran, and many European countries. Its prevalence in other regions, such as South Asia, much of South America, and Africa, is relatively unknown. The failure to contain the virus early on has meant that more aggressive measures are now necessary in order to avoid overwhelming healthcare systems, which would cause unacceptable levels of mortality. (Sadly, Italy’s health system has already been overwhelmed, forcing medical practitioners to engage in the most awful triage decisions.) Many jurisdictions, ranging from cities to entire countries, have chosen to implement mandatory lockdowns. These will likely have the desired effect of slowing transmission in the short term, but they cannot be maintained indefinitely. The challenge going forward is how to contain the spread of the virus without destroying the economy. 

In this post I will outline the elements of a proposal that I hope might do that. (I’ve been working on this for about a week and in the meantime some of the ideas have been advanced by others. E.g. this and this. Great minds clearly think alike.)

1. Identify those who have had COVID-19 and have recovered — and allow them to go back to work

While there are some reports of people who have had COVID-19 becoming reinfected, this seems to be very rare (a recent primate study implies reinfection is impossible), and the alleged cases may have been the result of false-negative tests followed by relapse by patients. The general presumption is that having the disease is likely to confer immunity for several months at least. Moreover, people with immunity who no longer show symptoms of the disease are very unlikely to transmit it. Allowing those people to go back to work will lessen the burden of the lockdown without appreciably increasing the risk of infection.

One group of such people is readily identifiable, though small: Those who tested positive for COVID-19 and subsequently recovered. Those people should be permitted to go back to work immediately.

2. Where possible, test, trace, treat, isolate

The town of Vo in Northern Italy, the site of the first death in the country from COVID-19, appears to have stopped the disease from spreading in about three weeks. It did so through a combination of universal testing, two weeks of strict lockdown, and quarantine of cases.  Could this be replicated elsewhere? 

Vo has a population of 3,300, so universal testing was not the gargantuan exercise it would be in, say, the continental US. Some larger jurisdictions have had similar success without resorting to universal testing and lockdown. South Korea managed to contain the spread of SARS-CoV2 relatively quickly through a combination of: social distancing (including closing schools and restricting large gatherings), testing anyone who had COVID-19 symptoms (and increasingly those without symptoms), tracing and testing of those who had contact with those symptomatic individuals, treating those with severe symptoms, quarantining those who tested positive but had no or only mild symptoms (the quarantine was monitored using a phone app and strictly enforced), and publicly sharing detailed information about the known incidence of the virus. 

A study of 181 cases in China published in the Annals of Internal Medicine found that the mean incubation period for COVID-19 is just over 5 days and only about 1 in 100 cases take longer than 14 days. By implication, if people have been strictly following the guidelines on avoiding contact with others, washing/sanitizing hands, sanitizing other objects, and avoiding hand-to-face contact, it should be possible, after two weeks of lockdown, to identify the vast majority of people who are not infected by testing everyone for the presence of SARS-CoV2 itself.

But that’s a series of big ifs. Since it takes a few days for the virus to replicate in the body to the point at which it is detectable, people who have recently been infected might test negative. Also, it is unlikely to be feasible logistically to test a significant proportion of the population for SARS-CoV2 in a short period of time. Existing tests require the use of RT-PCR, which is expensive and time consuming, not least because it can only be done at a lab, and while the capacity for such tests is increasing, it is likely around 50,000 per day in the entire US. 

Test, trace, treat, and isolate may be a feasible option for towns and even cities that currently have relatively low incidence of SARS-CoV2. However, given the lethargic progress of testing in places such as the US, UK and India, and hence poor existing knowledge of the extent of infection, it will not be a universal panacea.

3. Test as many people as possible for the presence of antibodies to SARS-CoV2

Outside those few places that have dramatically ramped up testing, it is likely that many more people have had COVID-19 than have been tested, either because they were asymptomatic or because they did not require clinical attention. Many, perhaps most of those people will no longer have the virus in their system but they should still have antibodies (indicating immunity). In order to identify those people, there should be widespread testing for antibodies to SARS-CoV2. 

Antibody tests are inexpensive, quick, and some can be done at home with minimal assistance. Numerous such tests have already been produced or are in development (see the list here). For example, Chinese manufacturer Innovita has produced a test that appears to be effective; in a clinical trial of 447 patients, it identified the presence of antibodies to SARS-CoV2 in 87.3% of clinically confirmed cases of COVID-19 (i.e., there were approximately 13% false negatives) but produced zero false positives. Innovita’s test was approved by China’s equivalent of the FDA and has been used widely there. 
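The reported trial figures imply simple operating characteristics. As a rough sketch (assuming the reported 87.3% sensitivity and 100% specificity hold in the field, which is an assumption, not a finding of the trial), one can compute the expected outcomes for a tested cohort:

```python
# Illustrative arithmetic only: expected test outcomes for a cohort, given
# a test with ~87.3% sensitivity (13% false negatives) and 100% specificity
# (zero false positives), as reported for the Innovita trial.

def expected_results(n_infected, n_uninfected, sensitivity=0.873, specificity=1.0):
    """Return expected counts of true/false positives and negatives."""
    return {
        "true_positives": n_infected * sensitivity,
        "false_negatives": n_infected * (1 - sensitivity),
        "false_positives": n_uninfected * (1 - specificity),
        "true_negatives": n_uninfected * specificity,
    }

r = expected_results(n_infected=1000, n_uninfected=9000)
# Of 1,000 people who truly had COVID-19, roughly 873 test positive and
# roughly 127 are missed -- so a negative result cannot clear someone.
# But with zero false positives, every positive result can be trusted.
```

This is why the post treats the test as a tool for ruling people *in* (confirming past infection) rather than ruling them *out*.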

Scanwell Health, a San Francisco-based startup, has an exclusive license to produce Innovita’s test in the U.S. and has already begun the process for obtaining approval from the US FDA under its Emergency Use Authorization. Scanwell estimates that the total cost of the test, including overnight shipping of the kit and support from a doctor or nurse practitioner from Lemonaid Health, will be around $70. One downside to Scanwell Health’s offering, however, is that it expects it to take 6-8 weeks to begin shipping testing kits once it receives authorization from the FDA.

So far, the FDA has approved at least one SARS-CoV2 antibody test, produced by Aytu Bioscience in Colorado. But Aytu’s test is designed for use by physicians, not at home. In Europe, at least one antibody test, produced by German company PharmACT, is already available. (That test has similar characteristics to Innovita’s.) Another has been approved by the MHRA in the UK for physician use and is awaiting approval for home use; the UK government has ordered 3.5 million of these tests, with the aim of distributing 250,000 per day by the end of April. 

Unfortunately, some people who have antibodies to SARS-CoV2 will also still be infectious. However, because different antibodies develop at different times during the course of infection, it may be possible to distinguish those who are still infectious from those who are no longer infectious. Specifically, immunoglobulin (Ig) M is present in larger amounts while the viral load is still present, while IgG is present in larger amounts later on (see e.g. this and the figure below). So, by testing for the presence of both IgM and IgG it should be possible to identify a large proportion of those who have had COVID-19 but are no longer infectious. (The currently available antibody tests result in about 13 percent false negatives, making them inappropriate as a means of screening out those who do not have COVID-19. But they produce zero false positives, making them ideal for identifying those who definitely have or have had COVID-19). In essence, people whose IgG test is positive but IgM test is negative can then go back to work. In addition, people who have had COVID-19 symptoms, are now symptom-free, and test positive for antibodies, should be allowed to go back to work.

4. Test for SARS-Cov2 among those who test negative for antibodies — and ensure that everyone who tests positive remains in isolation

Those people who test negative for SARS-CoV2 antibodies using the quick immunoassay, as well as those who are positive for both IgG and IgM (indicating that they may still be infectious), should then be tested for SARS-CoV2 using the RT-PCR test described above. Those who test negative for SARS-CoV2 should then be permitted to go back to work. But those who test positive should be required to remain in isolation and seek treatment if necessary.
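The decision logic in steps 3 and 4 can be sketched as a simple classifier. This is only an illustration of the post's proposed triage, under its stated assumptions (IgG-positive/IgM-negative implies recovered and non-infectious; everyone else needs an RT-PCR result before being cleared); the function and status names are hypothetical:

```python
# Sketch of the steps 3-4 triage logic. Assumptions (from the post, not
# settled science): IgG+ without IgM suggests past, resolved infection;
# anyone else is cleared only by a negative RT-PCR test for the virus.

def triage(igg_positive, igm_positive, pcr_positive=None):
    """Return a status for one person: 'cleared', 'isolate', or 'needs PCR test'.

    pcr_positive may be None when no RT-PCR result is available yet.
    """
    # Step 3: IgG without IgM suggests recovered, non-infectious immunity.
    if igg_positive and not igm_positive:
        return "cleared"
    # Step 4: everyone else needs an RT-PCR test for the virus itself.
    if pcr_positive is None:
        return "needs PCR test"
    return "isolate" if pcr_positive else "cleared"

# Recovered (IgG+/IgM-) people go straight back to work...
assert triage(igg_positive=True, igm_positive=False) == "cleared"
# ...while a positive PCR result keeps a person in isolation.
assert triage(igg_positive=True, igm_positive=True, pcr_positive=True) == "isolate"
assert triage(igg_positive=False, igm_positive=False, pcr_positive=False) == "cleared"
```

Step 5 amounts to re-running this triage over the remaining population until no one returns "isolate".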

5. Repeat steps 3 and 4 until nobody tests positive for COVID-19

By repeating steps 3 and 4, it should be possible gradually to enable the vast majority of the population to return to work, and thence to a life of greater normalcy, within a matter of weeks.

6. Some (possibly rather large) caveats

All of this relies on: (a) the ability rapidly to expand testing and (b) widespread compliance with isolation requirements. Neither of these conditions is by any means guaranteed, not least because the rules effectively discriminate in favor of people who have had COVID-19, which may create a perverse incentive to violate not only the isolation requirements but all the recommended hygiene practices — and thereby intentionally become infected with SARS-CoV2 on the presumption that they will then be able to go back to work sooner than otherwise. So, before this is rolled out, it is important to ensure that there will be widespread testing for COVID-19 in a timeframe shorter than the likely total time for contracting and recovering from COVID-19.

In addition, if test results are to be used as a means of establishing a person’s ability to travel and work while others are still under lockdown, it is important that there be a means of verifying the status of individuals. That might be possible through the use of an app, for example; such an app might also help policymakers make better resource-allocation decisions. 

Also, at-risk individuals should be strongly advised to remain in isolation until there is no further evidence of community transmission. 

7. The Mechanics of Testing

Given that there are not currently sufficient tests available for everyone to be tested in most locations, one obvious question is: who should be tested? As noted above, it makes sense initially to target those who have had COVID-19 symptoms and have recovered. Since only those people who have had such symptoms—and possibly their physician if they presented with their symptoms—will know who they are, this will rely largely on trust. (It’s possible that self-reporting apps could help.) 

But it may make sense initially to target tests more narrowly. The UK is initially targeting the antibody detection kits to healthcare and other key workers—people who are essential to the continued functioning of the country. That makes sense and could easily be applied in other places. 

Assuming that key workers can be supplied with antibody detection kits quickly, distribution should then be opened up more widely. No doubt insurance companies will be making decisions about the purchase of testing kits. Ideally, however, individuals should be able to buy kits such as Scanwell’s without going through a bureaucratic process, whether that be their insurance company or the NHS. And vendors should be free to price kits as they see fit, without worrying about the prospect of being subject to price caps such as those imposed by Medicaid or the VA, which have the perverse effect of incentivizing vendors to increase the list price. Finally, in order to increase the supply of tests as rapidly as possible, regulatory agencies should be encouraged to issue emergency approvals as quickly as possible. Having more manufacturers with a diverse array of tests available will increase access to testing more quickly and likely lead to more accurate testing too. Agencies such as the FDA should see this as their absolute priority right now. If the Mayo Clinic can compress six months’ product development into a month, the FDA can surely do its review far more quickly too. Lives—and the economy—depend upon it.

We don’t yet know how bad the coronavirus outbreak will be in America.  But we do know that the virus is likely to have a major impact on Americans’ access to medication.  Currently, 80% of the active ingredients found in the drugs Americans take are made in China, and the virus has disrupted China’s ability to manufacture and supply those ingredients.  Generic drugs, which comprise 90% of America’s drugs, are likely to be particularly impacted because most generics are made in India, and Indian drug makers rely heavily on Chinese-made ingredients.  Indeed, on Tuesday, March 3, India decided to restrict exports of 26 drugs and drug ingredients because of reductions in China’s supply.  This disruption to the generic supply chain could mean that millions of Americans will not get the drugs they need to stay alive and healthy.

Coronavirus-related shortages are only the latest in a series of problems recently afflicting the generic drug industry.  In the last few years, there have been many reports of safety issues affecting generic drug quality at both domestic and overseas manufacturing facilities.  Numerous studies have uncovered shady practices and quality defects, including generics contaminated with carcinogens, drugs in which the active ingredients were switched for ineffective or unsafe alternatives, and manufacturing facilities that falsify or destroy documents to conceal their misdeeds.

We’ve also been inundated with stories of generic drug makers hiking prices for their products.  Although, as a whole, generic drugs are much cheaper than innovative brand products, the prices for many generic drugs are on the increase.  For some generics – Martin Shkreli’s Daraprim, heart medication Digoxin, antibiotic Doxycycline, insulin, and many others – prices have increased by several hundred percent. It turns out that many of the price increases are the result of anticompetitive behavior in the generic market. For others, the price increases reflect the increasing difficulty generic drug makers face in earning profits on low-priced drugs.

Even before the coronavirus outbreak, there were numerous instances of shortages for critical generic drugs.  These shortages often result from drug makers’ lack of incentive to manufacture low-priced drugs that don’t earn much profit. The shortages have been growing in frequency and duration in recent years.  As a result of the shortages, 90 percent of U.S. hospitals report having to find alternative drug therapies, costing patients and hospitals over $400 million last year.  In other unfortunate situations, reasonable alternatives simply are not available and patients suffer.

With generic drug makers’ growing list of problems, many policy makers have called for significant changes to America’s approach to the generic drug industry. Perhaps the FDA needs to increase its inspection of overseas facilities?  Perhaps the FTC and state and federal prosecutors should step up their investigations and enforcement actions against anticompetitive behavior in the industry? Perhaps the FDA should do even more to promote generic competition by expediting generic approvals?

While these actions and other proposals could certainly help, none are aimed at resolving more than one or two of the significant problems vexing the industry. Senator Elizabeth Warren has proposed a more substantial overhaul that would bring the U.S. government into the generic-drug-making business. Under Warren’s plan, the Department of Health and Human Services (HHS) would manufacture or contract for the manufacture of drugs to be sold at lower prices.  Nationalizing the generic drug industry in this way would make the inspection of manufacturing facilities much easier and could ideally eliminate drug shortages.  In January, California’s governor proposed a similar system under which the state would begin manufacturing or contracting to manufacture generic drugs.

However, critics of public manufacturing argue that manufacturing and distribution infrastructure would be extremely costly to set up, with taxpayers footing the bill.  And even after the initial set-up, market dynamics that affect costs, such as increasing raw-material costs or supply-chain disruptions, would also mean greater costs for taxpayers.  Moreover, absent the profit incentive the Hatch-Waxman Act creates to develop and manufacture generic drugs, it’s not clear that governments could develop or manufacture a sufficient supply of generics (consider the difference in efficiency between the U.S. Postal Service and either UPS or FedEx).

Another approach might be to treat the generic drug industry as a regulated industry. This model has been applied to utilities in the past when unregulated private ownership of utility infrastructure could not provide sufficient supply to meet consumer need, address market failures, or prevent the abuse of monopoly power.  Similarly, consumers’ need for safe and affordable medicines, market failures inherent throughout the industry, and industry consolidation that could give rise to market power suggest the regulated model might work well for generic drugs.   

Under this approach, Hatch-Waxman incentives could remain in place, granting the first generic drug an exclusivity period during which it could earn significant profits for the generic drug maker.  But when the exclusivity period ends, an agency like HHS would assign manufacturing responsibility for a particular drug to a handful of generic drug makers wishing to market in the U.S.  These companies would be guaranteed a profit based on a set rate of return on the costs of high-quality domestic manufacturing.  In order to maintain their manufacturing rights, facilities would have to meet strict FDA guidelines to ensure high quality drugs. 

Like the Warren and California proposals, this approach would tackle several problems at once.  Prices would be kept under control and facilities would face frequent inspections to ensure quality.  A guaranteed profit would eliminate generic companies’ financial risk, reducing their incentive to use cheap (and often unsafe) drug ingredients or to engage in illegal anticompetitive behavior.  It would also encourage steady production to reduce instances of drug shortages.  Unlike the Warren and California proposals, this approach would build on the existing generic infrastructure so that taxpayers don’t have to foot the bill to set up public manufacturing.  It would also continue to incentivize the development of generic alternatives by maintaining the Hatch-Waxman exclusivity period, and it would motivate the manufacture of generic drugs by companies seeking a reliable rate of return.

Several issues would need to be worked out under a regulated generic industry approach, including preventing manipulation of rates of return, guarding against regulatory capture, and ensuring that regulators have the incentives and knowledge needed to oversee the drug makers. However, the recurring crises affecting generic drugs indicate the industry is rife with market failures.  Perhaps only a radical new approach will achieve lasting and necessary change.

The terms of the United Kingdom’s (UK) exit from the European Union (EU) – “Brexit” – are of great significance not just to UK and EU citizens, but for those in the United States and around the world who value economic liberty (see my Heritage Foundation memorandum giving the reasons why, here).

If Brexit is to promote economic freedom and enhanced economic welfare, Brexit negotiations between the UK and the EU must not limit the ability of the United Kingdom to pursue (1) efficiency-enhancing regulatory reform and (2) trade liberalizing agreements with non-EU nations.  These points are expounded upon in a recent economic study (The Brexit Inflection Point) by the non-profit UK think tank the Legatum Institute, which has produced an impressive body of research on the benefits of Brexit, if implemented in a procompetitive, economically desirable fashion.  (As a matter of full disclosure, I am a member of Legatum’s “Special Trade Commission,” which “seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  Members of the Special Trade Commission are unpaid – they serve on a voluntary pro bono basis.)

Unfortunately, however, leading UK press commentators have urged the UK Government to accede to a full harmonization of UK domestic regulations and trade policy with the EU.  Such a deal would be disastrous.  It would prevent the UK from entering into mutually beneficial trade liberalization pacts with other nations or groups of nations (e.g., with the U.S. and with the members of the Trans-Pacific Partnership (TPP) trade agreement), because such arrangements by necessity would lead to a divergence with EU trade strictures.  It would also preclude the UK from unilaterally reducing harmful regulatory burdens that are a byproduct of economically inefficient and excessive EU rules.  In short, it would be antithetical to economic freedom and economic welfare.

Notably, in a November 30 article (Six Impossible Notions About “Global Britain”), a well-known business journalist, Martin Wolf of the Financial Times, sharply criticized The Brexit Inflection Point’s recommendation that the UK should pursue trade and regulatory policies that would diverge from EU standards.  In particular, Wolf characterized as an “impossible thing” Legatum’s point that the UK should not “’allow itself to be bound by the EU’s negotiating mandate.’  We all now know this is infeasible.  The EU holds the cards and it knows it holds the cards. The Legatum authors still do not.”

Shanker Singham, Director of Economic Policy and Prosperity Studies at Legatum, brilliantly responded to Wolf’s critique in a December 4 article (published online by CAPX) entitled A Narrow-Minded Brexit Is Doomed to Fail.  Singham’s trenchant analysis merits being set forth in its entirety (by permission of the author):

“Last week, the Financial Times’s chief economics commentator, Martin Wolf, dedicated his column to criticising The Brexit Inflection Point, a report for the Legatum Institute in which Victoria Hewson, Radomir Tylecote and I discuss what would constitute a good end state for the UK as it seeks to exercise an independent trade and regulatory policy post Brexit, and how we get from here to there.

We write these reports to advance ideas that we think will help policymakers as they tackle the single biggest challenge this country has faced since the Second World War. We believe in a marketplace of ideas, and we welcome challenge. . . .

[W]e are thankful that Martin Wolf, an eminent economist, has chosen to engage with the substance of our arguments. However, his article misunderstands the nature of modern international trade negotiations, as well as the reality of the European Union’s regulatory system – and so his claim that, like the White Queen, we “believe in impossible things” simply doesn’t stack up.

Mr Wolf claims there are six impossible things that we argue. We will address his rebuttals in turn.

But first, in discussions about the UK’s trade policy, it is important to bear in mind that the British government is currently discussing the manner in which it will retake its independent WTO membership. This includes agricultural import quotas, and its WTO rectification processes with other WTO members.

If other countries believe that the UK will adopt the position of maintaining regulatory alignment with the EU, as advocated by Mr Wolf and others, the UK’s negotiating strategy would be substantially weaker. It would quite wrongly suggest that the UK will be unable to lower trade barriers and offer the kind of liberalisation that our trading partners seek and that would work best for the UK economy. This could negatively impact both the UK and the EU’s ongoing discussions in the WTO.

Has the EU’s trading system constrained growth in the world?

The first impossible thing Mr Wolf claims we argue is that the EU system of protectionism and harmonised regulation has constrained economic growth for Britain and the world. He is right to point out that the volume of world trade has increased, and the UK has, of course, experienced GDP growth while a member of the EU.

However, as our report points out, the EU’s prescriptive approach to regulation, especially in the recent past (for example, its approach on data protection, audio-visual regulation, the restrictive application of the precautionary principle, REACH chemicals regulation, and financial services regulations to name just a few) has led to an increase in anti-competitive regulation and market distortions that are wealth destructive.

As the OECD notes in various reports on regulatory reform, regulation can act as a behind-the-border barrier to trade and impede market openness for trade and investment. Inefficient regulation imposes unnecessary burdens on firms, increases barriers to entry, impacts on competition and incentives for innovation, and ultimately hurts productivity. The General Data Protection Regulation (GDPR) is an example of regulation that is disproportionate to its objectives; it is highly prescriptive and imposes substantial compliance costs for business that want to use data to innovate.

Rapid growth during the post-war period is in part thanks to the progressive elimination of border trade barriers. But, in terms of wealth creation, we are no longer growing at that rate. Since before the financial crisis, measures of actual wealth creation (not GDP which includes consumer and government spending) such as industrial output have stalled, and the number of behind-the-border regulatory barriers has been increasing.

The global trading system is in difficulty. The lack of negotiation of a global trade round since the Uruguay Round, and the lack of serious services liberalisation either in the built-in agenda of the WTO or sectorally following on from the Basic Telecoms Agreement and its Reference Paper on Competition Safeguards in 1997, have led to an increase in behind-the-border barriers and anti-competitive distortions and regulation all over the world. This stasis in international trade negotiations is an important contributory factor to what many economists have talked about as a “new normal” of limited growth, and a global decline in innovation.

Meanwhile the EU has sought to force its regulatory system on the rest of the world (the GDPR is an example of this). If it succeeds, the result would be the kind of wealth destruction that pushes more people into poverty. It is against this backdrop that the UK is negotiating with both the EU and the rest of the world.

The question is whether an independent UK, the world’s sixth biggest economy and second biggest exporter of services, is able to contribute to improving the dynamics of the global economic architecture, which means further trade liberalisation. The EU is protectionist against outside countries, which is antithetical to the overall objectives of the WTO. This is true in agriculture and beyond. For example, the EU imposes tariffs on cars at four times the rate applied by the US, while another large auto manufacturing country, Japan, has unilaterally removed its auto tariffs.

In addition, the EU27 represents a declining share of UK exports, which is rather counter-intuitive for a Customs Union and single market. In 1999, the EU represented 55 per cent of UK exports, and by 2016, this was 43 per cent. That said, the EU will remain an important, albeit declining, market for the UK, which is why we advocate a comprehensive free trade agreement with it.

Can the UK secure meaningful regulatory recognition from the EU without being identical to it?

Second, Mr Wolf suggests that regulatory recognition between the UK and EU is possible only if there is harmonisation or identical regulation between the UK and EU.

This is at odds with WTO practice, stretching back to its rules on domestic laws and regulation as encapsulated in Article III of the GATT and Article VI of the GATS, and as expressed in the Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) agreements.

This is the critical issue. The direction of travel of international trade thinking is towards countries recognising each other’s regulatory systems if they achieve the same ultimate goal of regulation, even if the underlying regulation differs, and to regulate in ways that are least distortive to international trade and competition. There will be areas where this level of recognition will not be possible, in which case UK exports into the EU will of course have to satisfy the standards of the EU. But even here we can mitigate the trade costs to some extent by Mutual Recognition Agreements on conformity assessment and market surveillance.

Had the US taken the view that it would not receive regulatory recognition unless their regulatory systems were the same, the recent agreement on prudential measures in insurance and reinsurance services between the EU and US would not exist. In fact this point highlights the crucial issue which the UK must successfully negotiate, and one in which its interests are aligned with other countries and with the direction of travel of the WTO itself. The TBT and SPS agreements broadly provide that mutual recognition should not be denied where regulatory goals are aligned but technical regulation differs.

Global trade and regulatory policy increasingly looks for regulation that promotes competition. The EU is on a different track, as the GDPR demonstrates. This is the reason that both the Canada-EU agreement (CETA) and the EU offer in the Trade in Services agreement (TiSA) do not include new services. If GDPR were to become the global standard, trade in data would be severely constrained, slowing the development of big data solutions, the fourth industrial revolution, and new services trade generally.

As many firms recognise, this would be extremely damaging to global prosperity. In arguing that regulatory recognition is only available if the UK is fully harmonised with the EU, Mr Wolf may be in harmony with the EU approach to regulation. But that is exactly the approach that is damaging the global trading environment.

Can the UK exercise trade policy leadership?

Third, Mr Wolf suggests that other countries do not, and will not, look to the UK for trade leadership. He cites the US’s withdrawal from the trade negotiating space as an example. But surely the absence of the world’s biggest services exporter means that the world’s second biggest exporter of services will be expected to advocate for its own interests, and argue for greater services liberalisation.

Mr Wolf believes that the UK is a second-rank power in decline. We take a different view of the world’s sixth biggest economy, the financial capital of the world and the second biggest exporter of services. As former New Zealand High Commissioner, Sir Lockwood Smith, has said, the rest of the world does not see the UK as the UK too often seems to see itself.

The global companies that have their headquarters in the UK do not see things the same way as Mr Wolf. In fact, the lack of trade leadership since 1997 means that a country with significant services exports would be expected to show some leadership.

Mr Wolf’s point is that far from seeking to grandiosely lead global trade negotiations, the UK should stick to its current knitting, which consists of its WTO rectification, and includes the negotiation of its agricultural import quotas and production subsidies in agriculture. This is perhaps the most concerning part of his argument. Yes, the UK must rectify its tariff schedules, but for that process to be successful, especially on agricultural import quotas, it must be able to demonstrate to its partners that it will be able to grant further liberalisation in the near term future. If it can’t, then its trading partners will have no choice but to demand as much liberalisation as they can secure right now in the rectification process.

This will complicate that process, and cause damage to the UK as it takes up its independent WTO membership. Those WTO partners who see the UK as vulnerable on this point will no doubt see validation in Mr Wolf’s article and assume it means that no real liberalisation will be possible from the UK. The EU should note that complicating this process for the UK will not help the EU in its own WTO processes, where it is vulnerable.

Trade negotiations are dynamic not static and the UK must act quickly

Fourth, Mr Wolf suggests that the UK is not under time pressure to “escape from the EU”.  This statement does not account for how international trade negotiations work in practice. In order for countries to cooperate with the UK on its WTO rectification, and its TRQ negotiations, as well to seriously negotiate with it, they have to believe that the UK will have control over tariff schedules and regulatory autonomy from day one of Brexit (even if we may choose not to make changes to it for an implementation period).

If non-EU countries think that the UK will not be able to exercise its freedom for several years, they will simply demand their pound of flesh in the negotiations now, and get on with the rest of their trade policy agenda. Trade negotiations are not static. The US executive could lose trade-negotiating authority in the summer of next year if the NAFTA renegotiation is not going well. Other countries will seek to accede to the Trans-Pacific Partnership (TPP). China is moving forward with its Regional Comprehensive Economic Partnership, which does not meaningfully touch on domestic regulatory barriers. Much as we might criticise Donald Trump, his administration has expressed strong political will for a UK-US agreement, and in that regard has broken with traditional US trade policy thinking. The UK has an opportunity to strike and must take it.

The UK should prevail on the EU to allow Customs Agencies to be inter-operable from day one

Fifth, with respect to the challenges raised on customs agencies working together, our report argued that UK customs and the customs agencies of the EU member states should discuss customs arrangements at a practical and technical level now. What stands in the way of this is the EU’s stubbornness. Customs agencies are in regular contact on a business-as-usual basis, so the inability of UK and member-state customs agencies to talk to each other about the critical issue of new arrangements would seem to border on negligence. Of course, the EU should allow member states to have these critical conversations now.  Given the importance of customs agencies interoperating smoothly from day one, the UK Government must press its case with the European Commission to allow such conversations to start happening as a matter of urgency.

Does the EU hold all the cards?

Sixth, Mr Wolf argues that the EU holds all the cards and knows it holds all the cards, and therefore disagrees with our claim that the UK should “not allow itself to be bound by the EU’s negotiating mandate”. As with his other claims, Mr Wolf finds himself agreeing with the EU’s negotiators. But that does not make him right.

While absence of a trade deal will of course damage UK industries, the cost to EU industries is also very significant. Beef and dairy in Ireland, cars and dairy in Bavaria, cars in Catalonia, textiles and dairy in Northern Italy – all over Europe (and in politically sensitive areas), industries stand to lose billions of Euros and thousands of jobs. This is without considering the impact of no financial services deal, which would increase the cost of capital in the EU, aborting corporate transactions and raising the cost of the supply chain. The EU has chosen a mandate that risks neither party getting what it wants.

The notion that the EU is a masterful negotiator, while the UK’s negotiators are hopeless, is not the global view of the EU and the UK. Far from it. The EU in international trade negotiations has a reputation for being slow moving, lacking in creative vision, and unable to conclude agreements. Indeed, others have generally gone to the UK when they have been met with intransigence in Brussels.

What do we do now?

Mr Wolf’s argument amounts to a claim that the UK is not capable of the kind of further and deeper liberalisation that its economy would suggest is both possible and highly desirable both for the UK and the rest of the world. According to Mr Wolf, the UK can only consign itself to a highly aligned regulatory orbit around the EU, unable to realise any other agreements, and unable to influence the regulatory system around which it revolves, even as that system becomes ever more prescriptive and anti-competitive. Such a position is at odds with the facts and would guarantee a poor result for the UK and also cause opportunities to be lost for the rest of the world.

In all of our [Legatum Brexit-related] papers, we have started from the assumption that the British people have voted to leave the EU, and the government is implementing that outcome. We have then sought to produce policy recommendations based on what would constitute a good outcome as a result of that decision. This can be achieved only if we maximise the opportunities and minimise the disruptions.

We all recognise that the UK has embarked on a very difficult process. But there is a difference between difficult and impossible. There is also a difference between tasks that must be done and take time, and genuine negotiation points. We welcome the debate that comes from constructive challenge of our proposals; and we ask in turn that those who criticise us suggest alternative plans that might achieve positive outcomes. We look forward to the opportunity of a broader debate so that collectively the country can find the best path forward.”

 

As Truth on the Market readers prepare to enjoy their Thanksgiving dinners, let me offer some (hopefully palatable) “food for thought” on a competition policy for the new Trump Administration.  In referring to competition policy, I refer not just to lawsuits directed against private anticompetitive conduct, but more broadly to efforts aimed at curbing government regulatory barriers that undermine the competitive process.

Public regulatory barriers are a huge problem.  Their costs have been highlighted by prestigious international research bodies such as the OECD and World Bank, and considered by the International Competition Network’s Advocacy Working Group.  Government-imposed restrictions on competition benefit powerful incumbents and stymie entry by innovative new competitors.  (One manifestation of this that is particularly harmful for American workers and denies job opportunities to millions of lower-income Americans is occupational licensing, whose increasing burdens are delineated in a substantial body of research – see, for example, a 2015 Obama Administration White House Report and a 2016 Heritage Foundation Commentary that explore the topic.)  Federal Trade Commission (FTC) and Justice Department (DOJ) antitrust officials should consider emphasizing “state action” lawsuits aimed at displacing entry barriers and other unwarranted competitive burdens imposed by self-interested state regulatory boards.  When the legal prerequisites for such enforcement actions are not met, the FTC and the DOJ should ramp up their “competition advocacy” efforts, with the aim of convincing state regulators to avoid adopting new restraints on competition – and, where feasible, eliminating or curbing existing restraints.

The FTC and DOJ also should be authorized by the White House to pursue advocacy initiatives whose goal is to dismantle or lessen the burden of excessive federal regulations (such advocacy played a role in furthering federal regulatory reform during the Ford and Carter Administrations).  To bolster those initiatives, the Trump Administration should consider establishing a high-level federal task force on procompetitive regulatory reform, in the spirit of previous reform initiatives.  The task force would report to the president and include senior-level representatives from all federal agencies with regulatory responsibilities.  The task force could examine all major regulatory and statutory schemes overseen by Executive Branch and independent agencies, and develop a list of specific reforms designed to reduce federal regulatory impediments to robust competition.  Those reforms could be implemented through specific regulatory changes or legislative proposals, as the case might require.  The task force would have ample material to work with – for example, anticompetitive cartel-like output restrictions, such as those allowed under federal agricultural orders, are especially pernicious.  In addition to specific cartel-like programs, scores of regulatory regimes administered by individual federal agencies impose huge costs and merit particular attention, as documented in the Heritage Foundation’s annual “Red Tape Rising” reports on the growing burden of federal regulation (see, for example, the 2016 edition).

With respect to traditional antitrust enforcement, the Trump Administration should emphasize sound, empirically based economic analysis in merger and non-merger enforcement.  Enforcers should also adopt a “decision-theoretic” approach to enforcement, to the greatest extent feasible.  Specifically, in developing their enforcement priorities, in considering case selection criteria, and in assessing possible new (or amended) antitrust guidelines, DOJ and FTC antitrust enforcers should recall that antitrust is, like all administrative systems, inevitably subject to error costs.  Accordingly, Trump Administration enforcers should be mindful of the outstanding insights provided by Judge (and Professor) Frank Easterbrook on the harm from false positives in enforcement (which are more easily corrected by market forces than false negatives), and by Justice (and Professor) Stephen Breyer on the value of bright-line rules and safe harbors, supported by sound economic analysis.  As to specifics, the DOJ and FTC should issue clear statements of policy on the great respect that should be accorded the exercise of intellectual property rights, to correct Obama antitrust enforcers’ poor record on intellectual property protection (see, for example, here).  The DOJ and the FTC should also accord greater respect to the efficiencies associated with unilateral conduct by firms possessing market power, and should consider reissuing an updated and revised version of the 2008 DOJ Report on Single Firm Conduct.

With regard to international competition policy, procedural issues should be accorded high priority.  Full and fair consideration by enforcers of all relevant evidence (especially economic evidence) and the views of all concerned parties ensures that sound analysis is brought to bear in enforcement proceedings and, thus, that errors in antitrust enforcement are minimized.  Regrettably, a lack of due process in foreign antitrust enforcement has become a matter of growing concern to the United States, as foreign competition agencies proliferate and increasingly bring actions against American companies.  Thus, the Trump Administration should make due process problems in antitrust a major enforcement priority.  White House-level support (ensuring the backing of other key Executive Branch departments engaged in foreign economic policy) for this priority may be essential, in order to strengthen the U.S. Government’s hand in negotiations and consultations with foreign governments on process-related concerns.

Finally, other international competition policy matters also merit close scrutiny by the new Administration.  These include such issues as the inappropriate imposition of extraterritorial remedies on American companies by foreign competition agencies; the harmful impact of anticompetitive foreign regulations on American businesses; and inappropriate attacks on the legitimate exercise of intellectual property by American firms (in particular, American patent holders).  As in the case of process-related concerns, White House attention and broad U.S. Government involvement in dealing with these problems may be essential.

That’s all for now, folks.  May you all enjoy your turkey and have a blessed Thanksgiving with friends and family.

In the wake of the recent Open Internet Order (OIO) decision, separation of powers issues should be at the forefront of everyone’s mind. In reaching its decision, the DC Circuit relied upon Chevron to justify its extreme deference to the FCC. The court held, for instance, that

Our job is to ensure that an agency has acted “within the limits of [Congress’s] delegation” of authority… and that its action is not “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”… Critically, we do not “inquire as to whether the agency’s decision is wise as a policy matter; indeed, we are forbidden from substituting our judgment for that of the agency.”… Nor do we inquire whether “some or many economists would disapprove of the [agency’s] approach” because “we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.

The DC Circuit’s decision takes a broad view of Chevron deference and, in so doing, ignores or dismisses some of the limits placed upon the doctrine by cases like Michigan v. EPA and UARG v. EPA (though Judge Williams does bring up UARG in dissent).

Whatever one thinks of the validity of the FCC’s approach to regulating the Internet, there is no question that it has, at best, a weak statutory foothold. Without prejudging the merits of the OIO, or the question of deference to agencies that find “[regulatory] elephants in [statutory] mouseholes,”  such broad claims of authority, based on such limited statutory language, should give one pause. That the court upheld the FCC’s interpretation of the Act without expressing reservations, suggesting any limits, or admitting of any concrete basis for challenging the agency’s authority beyond circular references to “abuse of discretion” is deeply troubling.

Separation of powers is a fundamental feature of our democracy, and one that has undoubtedly contributed to the longevity of our system of self-governance. Not least among the important features of separation of powers is the ability of courts to review the lawfulness of legislation and executive action.

The founders presciently realized the dangers of allowing one part of the government to centralize power in itself. In Federalist 47, James Madison observed that

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. (emphasis added)

The modern administrative apparatus has become the sort of governmental body that the founders feared and that we have somehow grown to accept. The FCC is not alone in this: any member of the alphabet soup that constitutes our administrative state, whether “independent” or otherwise, is typically vested with great, essentially unreviewable authority over the economy and our daily lives.

As Justice Thomas so aptly put it in his must-read concurrence in Michigan v. EPA:

Perhaps there is some unique historical justification for deferring to federal agencies, but these cases reveal how paltry an effort we have made to understand it or to confine ourselves to its boundaries. Although we hold today that EPA exceeded even the extremely permissive limits on agency power set by our precedents, we should be alarmed that it felt sufficiently emboldened by those precedents to make the bid for deference that it did here. As in other areas of our jurisprudence concerning administrative agencies, we seem to be straying further and further from the Constitution without so much as pausing to ask why. We should stop to consider that document before blithely giving the force of law to any other agency “interpretations” of federal statutes.

Administrative discretion is fantastic — until it isn’t. If your party is the one in power, unlimited discretion gives your side the ability to run down a wish list, checking off controversial items that could never make it past a deliberative body like Congress. That same discretion, however, becomes a nightmare under extreme deference as political opponents, newly in power, roll back preferred policies. In the end, regulation tends toward the extremes, on both sides, and ultimately consumers and companies pay the price in the form of excessive regulatory burdens and extreme uncertainty.

In theory, it is (or should be) left to the courts to rein in agency overreach. Unfortunately, courts have been relatively unwilling to push back on the administrative state, leaving the task up to Congress. And Congress, too, has, over the years, found too much it likes in agency power to seriously take on the structural problems that give agencies effectively free rein. At least, until recently.

In March of this year, Representative Ratcliffe (R-TX) proposed HR 4768: the Separation of Powers Restoration Act (“SOPRA”). Arguably, this is the first real effort to fix the underlying problem since the 1995 “Comprehensive Regulatory Reform Act” (although, it should be noted, SOPRA is far more targeted than was the CRRA). Under SOPRA, 5 U.S.C. § 706 — the enacted portion of the APA that deals with judicial review of agency actions — would be amended to read as follows:

(a) To the extent necessary to decision and when presented, the reviewing court shall determine the meaning or applicability of the terms of an agency action and decide de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies. Notwithstanding any other provision of law, this subsection shall apply in any action for judicial review of agency action authorized under any provision of law. No law may exempt any such civil action from the application of this section except by specific reference to this section.

These changes to the scope of review would operate as a much-needed check on the unlimited discretion that agencies currently enjoy. They give courts the ability to review “de novo all relevant questions of law,” which includes agencies’ interpretations of their own rules.

The status quo has created a vicious cycle. The Chevron doctrine, as it has played out, gives outsized incentives to both federal agencies and courts to essentially disregard Congress’s intended meaning for particular statutes. Today an agency can write rules and make decisions safe in the knowledge that Chevron will likely insulate it from any truly serious probing by a district court with regard to how well the agency’s action actually matches up with congressional intent or with even rudimentary cost-benefit analysis.

Defenders of the administrative state may balk at changing this state of affairs, of course. But defending an institution that is almost entirely immune from judicial and legal review seems to be a particularly hard row to hoe.

Public Knowledge, for instance, claims that

Judicial deference to agency decision-making is critical in instances where Congress’ intent is unclear because it balances each branch of government’s appropriate role and acknowledges the realities of the modern regulatory state.

To quote Justice Scalia, an unfortunate champion of the Chevron doctrine, this is “pure applesauce.”

The very core of the problem that SOPRA addresses is that the administrative state is not a proper branch of government — it’s a shadow system of quasi-legislation and quasi-legal review. Congress can be chastened by popular vote. Judges who abuse discretion can be overturned (or impeached). The administrative agencies, on the other hand, are insulated through doctrines like Chevron and Auer, and their personnel are subject, more or less, to the political whims of the executive branch.

Even agencies directly under the control of the executive branch  — let alone independent agencies — become petrified caricatures of their original design as layers of bureaucratic rule and custom accrue over years, eventually turning the organization into an entity that serves, more or less, to perpetuate its own existence.

Other supporters of the status quo actually identify the unreviewable see-saw of agency discretion as a feature, not a bug:

Even people who agree with the anti-government premises of the sponsors [of SOPRA] should recognize that a change in the APA standard of review is an inapt tool for advancing that agenda. It is shortsighted, because it ignores the fact that, over time, political administrations change. Sometimes the administration in office will generally be in favor of deregulation, and in these circumstances a more intrusive standard of judicial review would tend to undercut that administration’s policies just as surely as it may tend to undercut a more progressive administration’s policies when the latter holds power. The APA applies equally to affirmative regulation and to deregulation.

But presidential elections — far from justifying this extreme administrative deference — actually make the case for trimming the sails of the administrative state. Presidential campaigns have increasingly become contests over how candidates will wield the immense regulatory power vested in the executive branch.

Thus, for example, as part of his presidential bid, Jeb Bush indicated he would use the EPA to roll back every policy that Obama had put into place. One of Donald Trump’s allies suggested that Trump “should turn off [CNN’s] FCC license” in order to punish the news agency. And VP hopeful Elizabeth Warren has suggested using the FDIC to limit the growth of financial institutions, and using the FCC and FTC to tilt the markets to make it easier for small companies to get an advantage over the “big guys.”

Far from being neutral, technocratic administrators of complex social and economic matters, administrative agencies have become one more political weapon of majority parties as they make the case for how their candidates will use all the power at their disposal — and more — to work their will.

As Justice Thomas, again, noted in Michigan v. EPA:

In reality…, agencies “interpreting” ambiguous statutes typically are not engaged in acts of interpretation at all. Instead, as Chevron itself acknowledged, they are engaged in the “formulation of policy.” Statutory ambiguity thus becomes an implicit delegation of rulemaking authority, and that authority is used not to find the best meaning of the text, but to formulate legally binding rules to fill in gaps based on policy judgments made by the agency rather than Congress.

And this is just the thing: SOPRA would bring far-more-valuable predictability and longevity to our legal system by imposing a system of accountability on the agencies. Currently, agencies often believe they can act with impunity (until the next election, at least), and even the intended constraints of the APA frequently won’t do much to tether their whims to statute or law if they’re intent on deviating. Having a known constraint (or, at least, a reliable process by which judicial constraint may be imposed) on their behavior will make them think twice about exactly how legally and economically sound proposed rules and other actions are.

The administrative state isn’t going away, even if SOPRA were passed; it will continue to be the source of the majority of the rules under which our economy operates. We have long believed that a benefit of our judicial system is its consistency and relative lack of politicization. If this is a benefit for interpreting laws when agencies aren’t involved, it should also be a benefit when they are involved. Particularly as more and more law emanates from agencies rather than Congress, the oversight of largely neutral judicial arbiters is an essential check on the administrative apparatus’ “accumulation of all powers.”

The interest of judges tends to include a respect for the development of precedent that yields consistent and transparent rules for all future litigants and, more broadly, for economic actors and consumers making decisions in the shadow of the law. This is markedly distinct from agencies, which, more often than not, promote the particular, shifting, and often-narrow political sentiments of the day.

Whether a Republican- or a Democrat-appointed district judge reviews an agency action, that judge will be bound (more or less) by the precedent that came before, regardless of the judge’s individual political preferences. Contrast this with the FCC’s decision to reclassify broadband as a Title II service, for example, where previously it had been committed to the idea that broadband was an information service, subject to an entirely different — and far less onerous — regulatory regime. Of course, the next FCC chairman may feel differently, and nothing would stop another regulatory shift back to the pre-OIO status quo. Perhaps more troublingly, the enormous discretion afforded by courts under current standards of review would permit the agency to endlessly tweak its rules — forbearing from some regulations but not others, un-forbearing, re-interpreting, etc. — with precious few judicial standards available to bring certainty to the rules or to ensure their fealty to the statute or the sound economics that is supposed to undergird administrative decisionmaking.

SOPRA, or a bill like it, would require the Commission to actually be accountable for its historical regulations, and would force it to undergo at least rudimentary economic analysis to justify its actions. This form of accountability can only be to the good.

The genius of our system is its (potential) respect for the rule of law. This is an issue that both sides of the aisle should be able to get behind: minority status is always just one election cycle away. We should all hope to see SOPRA — or some bill like it — gain traction, rooted in long-overdue reflection on just how comfortable we are as a polity with a bureaucratic system increasingly driven by unaccountable discretion.

The costs imposed by government regulation are huge and growing.  The Heritage Foundation produces detailed annual reports cataloguing the rising burden of the American regulatory state, and the Competitive Enterprise Institute recently estimated that regulations impose a $1.88 trillion annual tax on the U.S. economy.  Yet the need to rein in the regulatory behemoth has attracted relatively little attention in the early stages of the 2016 U.S. presidential campaign.  That may be changing, however.

On September 23, former Florida Governor Jeb Bush authored a short Wall Street Journal op-ed that set forth his ideas for curbing the “regulation tax.”  Governor Bush’s op-ed focuses on a host of particulars – including, for example, repealing specific onerous Environmental Protection Agency rules, repealing significant parts of the Dodd-Frank Act, repealing and replacing Obamacare, putting federal agencies on a “regulatory budget” (requiring a dollar of regulatory savings for each dollar of regulatory costs proposed), curbing frivolous regulatory litigation, streamlining regulatory approval processes, and placing greater emphasis on private and state-driven solutions.  Logical extensions of these initiatives, such as supplemental executive orders putting more “teeth” into routine regulatory review and support for the REINS Act (which would require congressional approval of “major” regulations), readily suggest themselves.

Regulatory reform initiatives have a long history.  A particularly notable example is the Reagan Administration’s 1981 efforts to curb excessive regulation through the Task Force on Regulatory Relief, which was linked to systematic White House review (through the Office of Management and Budget) of significant proposed regulations – a process that continues to this day (albeit imperfectly, to say the least).  It is to be hoped that all other presidential candidates will also think about and prepare their own regulatory reform proposals.  This should not be deemed a partisan issue.  President Carter, after all, promoted regulatory reform and ushered in welfare-enhancing transportation deregulation, and President Clinton touted deregulation accomplished during the first term of his presidency.

In short, done properly, reducing regulatory burdens should “supercharge” U.S. economic growth and enhance efficiency, without harming consumers or the environment – indeed, consumers and the environment should benefit long-term from smarter, streamlined, cost-beneficial regulation.