
In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement’s falsity did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official), with receiving a benefit (fraud), or with harming someone’s reputation (defamation), the First Amendment does not permit penalties for false speech in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not be the most narrowly tailored way to deal with such speech, and it would be bound to chill even true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies a platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The bill would treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability would still be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the misinformation about COVID-19 or election security that animates the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark 1969 decision in Brandenburg v. Ohio, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or that fit under the related doctrine of incitement are not. The government may regulate those types of speech, and it does. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what governments can do to combat misinformation and hate speech online. The answer may be a law requiring takedown, by court order, of speech that has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Daniel Lyons is a professor of law at Boston College Law School and a visiting fellow at the American Enterprise Institute.]

For many, the chairmanship of Ajit Pai is notable for its many headline-grabbing substantive achievements, including the Restoring Internet Freedom order, 5G deployment, and rural buildout—many of which have been or will be discussed in this symposium. But that conversation is incomplete without also acknowledging Pai’s careful attention to the basic blocking and tackling of running a telecom agency. The last four years at the Federal Communications Commission were marked by small but significant improvements in how the commission functions, and few are more important than the chairman’s commitment to transparency.

Draft Orders: The Dark Ages Before 2017

This commitment is most notable in Pai’s revisions to the open meeting process. From time immemorial, the FCC chairman would set the agenda for the agency’s monthly meeting by circulating draft orders to the other commissioners three weeks in advance. But the public was deliberately excluded from that distribution list. During this period, the commissioners would read proposals, negotiate revisions behind the scenes, then meet publicly to vote on final agency action. But only after the meeting—often several days later—would the actual text of the order be made public.

The opacity of this process had several adverse consequences. Most obviously, the public lacked details about the substance of the commission’s deliberations. The Government in the Sunshine Act requires the agency’s meetings to be made public so the American people know what their government is doing. But without the text of the orders under consideration, the public had only a superficial understanding of what was happening each month. The process was reminiscent of House Speaker Nancy Pelosi’s famous gaffe that Congress needed to “pass the [Affordable Care Act] bill so that you can find out what’s in it.” During the high-profile deliberations over the Open Internet Order in 2015, then-Commissioner Pai made significant hay over this secrecy, repeatedly posting pictures of himself with the 300-plus-page order on Twitter with captions such as “I wish the public could see what’s inside” and “the public still can’t see it.”

Other consequences were less apparent, but more detrimental. Because the public lacked detail about key initiatives, the telecom media cycle could be manipulated by strategic leaks designed to shape the final vote. As then-Commissioner Pai testified to Congress in 2016:

[T]he public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Sometimes, this secrecy backfired on the chairman, such as when net-neutrality advocates used media pressure to shape the 2014 Open Internet NPRM. Then-Chairman Tom Wheeler’s proposed order sought to follow the roadmap laid out by the D.C. Circuit’s Verizon decision, which relied on Title I to prevent ISPs from blocking content or acting in a “commercially unreasonable manner.” Proponents of a more aggressive Title II approach leaked these details to the media in a negative light, prompting tech journalists and advocates to unleash a wave of criticism alleging the chairman was “killing off net neutrality to…let the big broadband providers double charge.” In full damage control mode, Wheeler attempted to “set the record straight” about “a great deal of misinformation that has recently surfaced regarding” the draft order. But the tempest created by these leaks continued, pressuring Wheeler into adding a Title II option to the NPRM—which, of course, became the basis of the 2015 final rule.

This secrecy also harmed agency bipartisanship, as minority commissioners sometimes felt as much in the dark as the general public. As Wheeler scrambled to address Title II advocates’ concerns, he reportedly shared revised drafts with fellow Democrats but did not circulate the final draft to Republicans until less than 48 hours before the vote—leading Pai to remark cheekily that “when it comes to the Chairman’s latest net neutrality proposal, the Democratic Commissioners are in the fast lane and the Republican Commissioners apparently are being throttled.” Similarly, Pai complained during the 2014 spectrum screen proceeding that “I was not provided a final version of the item until 11:50 p.m. the night before the vote and it was a substantially different document with substantively revised reasoning than the one that was previously circulated.”

Letting the Sunshine In

Eliminating this culture of secrecy was one of Pai’s first decisions as chairman. Less than a month after assuming the reins at the agency, he announced that the FCC would publish all draft items at the same time they are circulated to commissioners, typically three weeks before each monthly meeting. While this move was largely applauded, some were concerned that this transparency would hamper the agency’s operations. One critic suggested that pre-meeting publication would hamper negotiations among commissioners: “Usually, drafts created negotiating room…Now the chairman’s negotiating position looks like a final position, which undercuts negotiating ability.” Another, while supportive of the change, was concerned that the need to put a draft order in final form well before a meeting might add “a month or more to the FCC’s rulemaking adoption process.”

Fortunately, these concerns proved to be unfounded. The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan—compared to 33% and 69.9%, respectively, under Chairman Wheeler. 

This increased transparency also improved the overall quality of the agency’s work product. In a 2018 speech before the Free State Foundation, Commissioner Mike O’Rielly explained that “drafts are now more complete and more polished prior to the public reveal, so edits prior to the meeting are coming from Commissioners, as opposed to there being last minute changes—or rewrites—from staff or the Office of General Counsel.” Publishing draft orders in advance allows the public to flag potential issues for revision before the meeting, which improves the quality of the final draft and reduces the risk of successful post-meeting challenges via motions for reconsideration or petitions for judicial review. O’Rielly went on to note that the agency seemed to be running more efficiently as well, as “[m]eetings are targeted to specific issues, unnecessary discussions of non-existent issues have been eliminated, [and] conversations are more productive.”

Other Reforms

While pre-meeting publication was the most visible improvement to agency transparency, there are other initiatives also worth mentioning.

  • Limiting Editorial Privileges: Chairman Pai dramatically limited “editorial privileges,” a longtime tradition that allowed agency staff to make changes to an order’s text even after the final vote. Under Pai, editorial privileges were limited to technical and conforming edits only; substantive changes were not permitted unless they were proposed directly by a commissioner and only in response to new arguments offered by a dissenting commissioner. This reduces the likelihood of a significant change being introduced outside the public eye.
  • Fact Sheet: Adopting a suggestion of Commissioner Mignon Clyburn, Pai made it a practice to preface each published draft order with a one-page fact sheet that summarized the item in lay terms, as much as possible. This made the agency’s monthly work more accessible and transparent to members of the public who lacked the time to wade through the full text of each draft order.
  • Online Transparency Dashboard: Pai also launched an online dashboard on the agency’s website. This dashboard offers metrics on the number of items currently pending at the commission by category, as well as quarterly trends over time.
  • Restricting Comment on Upcoming Items: As a gesture of respect to fellow commissioners, Pai committed that the chairman’s office would not brief the press or members of the public, or publish a blog, about an upcoming matter before it was shared with other commissioners. This was another step toward reducing the strategic use of leaks or selective access to guide the tech media news cycle.

And while it’s technically not a transparency reform, Pai also deserves credit for his willingness to engage the public as the face of the agency. He was the first FCC commissioner to join Twitter, and throughout his chairmanship he maintained an active social media presence that helped personalize the agency and make it more accessible. His commitment to this channel is all the more impressive when one considers the way some opponents used these platforms to hurl a steady stream of hateful, often violent and racist invective at him during his tenure.

Pai deserves tremendous credit for spearheading these efforts to bring the agency out of the shadows and into the sunlight. Of course, he was not working alone. Pai shares credit with other commissioners and staff who supported transparency and worked to bring these policies to fruition, most notably former Commissioner O’Rielly, who beat a steady drum for process reform throughout his tenure.

We do not yet know who President Joe Biden will appoint as Pai’s successor. It is fair to assume that whoever is chosen will seek to put his or her own stamp on the agency. But let’s hope that enhanced transparency and the other process reforms enacted over the past four years remain a staple of agency practice moving forward. They may not be flashy, but they may prove to be the most significant and long-lasting impact of the Pai chairmanship.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Brent Skorup is a senior research fellow at the Mercatus Center at George Mason University.]

Ajit Pai came into the Federal Communications Commission chairmanship with a single priority: to improve the coverage, cost, and competitiveness of U.S. broadband for the benefit of consumers. The 5G Fast Plan, the formation of the Broadband Deployment Advisory Committee, the large spectrum auctions, and other broadband infrastructure initiatives over the past four years have resulted in accelerated buildouts and higher-quality services. Millions more Americans have gotten connected because of agency action and industry investment.

That brings us to Chairman Pai’s most important action: restoring the deregulatory stance of the FCC toward broadband services and repealing the Title II “net neutrality” rules in 2018. Had he not done this, his and future FCCs would have been bogged down in inscrutable, never-ending net neutrality debates, reminiscent of the Fairness Doctrine disputes that consumed the agency 50 years ago. By doing that, he cleared the decks for the pro-deployment policies that followed and redirected the agency away from its roots in mass-media policy toward a future where the agency’s primary responsibilities are encouraging broadband deployment and adoption.

It took tremendous courage from Chairman Pai and Commissioners Michael O’Rielly and Brendan Carr to vote to repeal the 2015 Title II regulations, though they probably weren’t prepared for the public reaction to a seemingly arcane dispute over regulatory classification. The hysteria ginned up by net-neutrality advocates, members of Congress, celebrities, and too-credulous journalists was unlike anything I’ve seen in political advocacy. Advocates, of course, don’t intend to provoke disturbed individuals but the irresponsible predictions of “the end of the internet as we know it” and widespread internet service provider (ISP) content blocking drove one man to call in a bomb threat to the FCC, clearing the building in a desperate attempt to delay or derail the FCC’s Title II repeal. At least two other men pleaded guilty to federal charges after issuing vicious death threats to Chairman Pai, a New York congressman, and their families in the run-up to the regulation’s repeal. No public official should have to face anything resembling that over a policy dispute.

For all the furor, net-neutrality advocates promised a neutral internet that never was and never will be. “Happy little bunny rabbit dreams” is how David Clark of MIT, an early chief protocol architect of the internet, derided the idea of treating all online traffic the same. Relatedly, the no-blocking rule—the sine qua non of net neutrality—was always a legally dubious requirement. Legal scholars had for years called into doubt the constitutionality of imposing must-carry requirements on ISPs. Unsurprisingly, a federal appellate judge pressed this point during oral arguments over the net neutrality rules in 2016. The Obama FCC’s attorney conceded without a fight: even after the net neutrality order, ISPs were “absolutely” free to curate the internet.

Chairman Pai recognized that the fight wasn’t about website blocking and it wasn’t, strictly speaking, about net neutrality. This was the latest front in the long battle over whether the FCC should strictly regulate mass-media distribution. There is a long tradition of progressive distrust of new (unregulated) media. The media access movement that pushed for broadcast TV and radio and cable regulations from the 1960s to 1980s never went away, but the terminology has changed: disinformation, net neutrality, hate speech, gatekeeper.

The decline in power of regulated media—broadcast radio and TV—and the rising power of unregulated internet-based media—social media, Netflix, and podcasts—meant that the FCC and Congress had few ways to shape American news and media consumption. In the words of Tim Wu, the law professor who coined the term “net neutrality,” the internet rules are about giving the agency the continuing ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”

Title II was the only tool available to bring this powerful new media—broadband access—under intense regulatory scrutiny by regulators and the political class. As net-neutrality advocate and Public Knowledge CEO Gene Kimmelman has said, the 2015 Order was about threatening the industry with vague but severe rules: “Legal risk and some ambiguity around what practices will be deemed ‘unreasonably discriminatory’ have been effective tools to instill fear for the last 20 years” for the telecom industry. Internet regulation advocates, he said at the time, “have to have fight after fight over every claim of discrimination, of new service or not.”

Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition. Net neutrality would draw the agency into contentious mass-media regulation once again, distracting it from universal service efforts, spectrum access and auctions, and cleaning up the regulatory detritus that had slowly accumulated since the passage of the agency’s guiding statutes: the 1934 Communications Act and the 1996 Telecommunications Act.

There are probably items Chairman Pai wishes he’d finished or had done slightly differently. He’s left a proud legacy, however, and his politically risky decision to repeal the Title II rules redirected agency energies away from no-win net-neutrality battles and toward broadband deployment and infrastructure. Great progress was made, and one hopes the Biden FCC chairperson will continue the trajectory that Pai set.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]

Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls.

Less will be said about the important work he has done rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations make up their legacies that the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted to do: both build the organization and make significant regulatory reforms.

Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.

Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.

Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai, over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard, as Democrats in general were deeply against President Donald Trump and some members of Congress found a divided FCC convenient.

The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.

On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.

Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analyses. To remedy this situation, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, but unfortunately it was with one of the rare party-line votes. Hopefully, Democratic commissioners have learned the value of the OEA.

The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pretend that his or her initiatives are somehow substantive.

Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:

On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of millions of dollars more in taxes than it is spending on the program. [footnotes omitted]

The lack of transparency led many to not trust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had not Chairman Pai changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on. And it ensures that orders do not change after they are voted on.

The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech to restore light-handed regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgement for the FCC’s research on the impacts of Ligado’s proposal.

It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Harold Feld is senior vice president of Public Knowledge.]

Chairman Ajit Pai prioritized making new spectrum available for 5G. To his credit, he succeeded. Over the course of four years, Chairman Pai made available more high-band and mid-band spectrum, for licensed use and unlicensed use, than any other Federal Communications Commission chairman. He did so in the face of unprecedented opposition from other federal agencies, navigating the chaotic currents of the Trump administration with political acumen and courage. The Pai FCC will go down in history as the 5G FCC, and as the chairman who protected the primacy of FCC control over commercial spectrum policy.

At the same time, the Pai FCC will also go down in history as the most conventional FCC on spectrum policy in the modern era. Chairman Pai undertook no sweeping review of spectrum policy in the manner of former Chairman Michael Powell, and he introduced no radically different spectrum technologies comparable to the unlicensed and spread-spectrum rules of the 1980s or the auctions of the 1990s. To the contrary, Chairman Pai actually rolled back the experimental short-term license structure adopted for the 3.5 GHz Citizens Broadband Radio Service (CBRS) band, replacing it with a conventional long-term license carrying an expectation of renewal. He missed a once-in-a-lifetime opportunity to dramatically expand the availability of unlicensed use of the TV white spaces (TVWS) via repacking after the television incentive auction. And in reworking the rules for the 2.5 GHz band, although Pai laudably embraced the recommendation to create an application window for rural tribal lands, he rejected the proposal to give nonprofits a chance to use the band for broadband, opting instead for conventional auction policy.

Ajit Pai’s Spectrum Policy Gave the US a Strong Position for 5G and Wi-Fi 6

To fully appreciate Chairman Pai’s accomplishments, we must first fully appreciate the urgency of opening new spectrum, and the challenges Pai faced from within the Trump administration itself. While providers can (and should) repurpose spectrum from older technologies to newer technologies, successful widespread deployment can only take place when sufficient amounts of new spectrum become available. This “green field” spectrum allows providers to build out new technologies with the most up-to-date equipment without disrupting existing subscriber services. The protocols developed for mobile 5G services work best with “mid-band” spectrum (generally considered to be frequencies between 2 GHz and 6 GHz). At the time Pai became chairman, the FCC did not have any mid-band spectrum identified for auction.

In addition, spectrum available for unlicensed use has become increasingly congested as more and more services depend on Wi-Fi and other unlicensed applications. Indeed, we have become so dependent on Wi-Fi for home broadband and networking that people routinely talk about buying “Wi-Fi” from commercial broadband providers rather than buying “internet access.” The United States suffered a further disadvantage in moving to the next generation of Wi-Fi, Wi-Fi 6, because it lacked a contiguous block of spectrum large enough to take advantage of Wi-Fi 6’s gigabit capabilities. Without gigabit Wi-Fi, Americans will increasingly be unable to use the applications that gigabit broadband to the home makes possible.

But virtually all spectrum—particularly mid-band spectrum—has significant incumbents. These incumbents include federal users, particularly the U.S. Department of Defense. Finding new spectrum optimal for 5G required reclaiming spectrum from these incumbents. Unlicensed services do not require relocating incumbent users, but creating such “underlay” unlicensed spectrum access requires rules to prevent unlicensed operations from causing harmful interference to licensed services. Needless to say, incumbent services fiercely resist any change in spectrum-allocation rules, claiming that reducing their spectrum allocation or permitting unlicensed services will compromise valuable existing services, while simultaneously causing harmful interference.

The need to reallocate unprecedented amounts of spectrum to ensure successful 5G and Wi-Fi 6 deployment in the United States created an unholy alliance of powerful incumbents, commercial and federal, dedicated to blocking FCC action. Federal agencies—in violation of established federal spectrum policy—publicly challenged the FCC’s spectrum-allocation decisions. Powerful industry incumbents—such as the auto industry, the power industry, and defense contractors—aggressively lobbied Congress to reverse the FCC’s spectrum actions by legislation. The National Telecommunications and Information Administration (NTIA), the federal agency tasked with formulating federal spectrum policy, was missing in action as it rotated among different acting agency heads. As the chair and ranking member of the House Commerce Committee noted, this unprecedented and very public opposition by federal agencies to FCC spectrum policy threatened U.S. wireless interests both domestically and internationally.

Navigating this hostile terrain required Pai to exercise both political acumen and political will. Pai accomplished his goals of reallocating 600 MHz of spectrum for auction, opening over 1,200 MHz of contiguous spectrum for unlicensed use, and authorizing the new entrant Ligado Networks over the objections of the DOD. He did so by a combination of persuading President Donald Trump of the importance of maintaining U.S. leadership in 5G and insisting on impeccable analysis by the FCC’s engineers to support the reallocation and underlay decisions. On the most significant votes, Pai secured support (or partial support) from the Democrats. Perhaps most importantly, Pai successfully defended the institutional role of the FCC as the ultimate decisionmaker on commercial spectrum use, not subject to a “heckler’s veto” by other federal agencies.

Missed Innovation, ‘Command and Control Lite’

While acknowledging Pai’s accomplishments, a fair consideration of Pai’s legacy must also consider his shortcomings. As chairman, Pai proved the most conservative FCC chair on spectrum policy since the 1980s. The Reagan FCC produced unlicensed and spread spectrum rules. The Clinton FCC created the spectrum auction regime. The Bush FCC included a spectrum task force and produced the concept of database management for unlicensed services, creating the TVWS and laying the groundwork for CBRS in the 3.5 GHz band. The Obama FCC recommended and created the world’s first incentive auction.

The Trump FCC did more than lack comparable accomplishments; it actively rolled back previous innovations. Within the first year of his chairmanship, Pai began a rulemaking designed to roll back the innovative priority access licenses (PALs). Under the rules adopted under the previous chairman, PALs provided exclusive use on a census-block basis for three years with no expectation of renewal. Pai delayed the rollout of CBRS for two years to replace this approach with a standard license structure of 10 years with an expectation of renewal, explicitly to facilitate traditional carrier investment in traditional networks. Pai followed the same path when restructuring the 2.5 GHz band. While laudably creating a window for Native Americans to apply for 2.5 GHz licenses on rural tribal lands, Pai rejected proposals from nonprofits to adopt a window for noncommercial providers to offer broadband. Instead, he simply eliminated the educational requirement and adopted a standard auction for distribution of the remaining licenses.

Similarly, in the unlicensed space, Pai consistently declined to promote innovation. In the repacking following the broadcast incentive auction, Pai rejected the proposal of structuring the repacking to ensure usable TVWS in every market. Instead, under Pai, the FCC managed the repacking so as to minimize the burden on incumbent primary and secondary licensees. As a result, major markets such as Los Angeles have zero channels available for unlicensed TVWS operation. This effectively relegates the service to a niche rural service, augmenting existing rural wireless ISPs.

The result is a modified form of “command and control,” the now-discredited system where the FCC would allocate licenses to provide specific services such as “FM radio” or “mobile pager service.” While preserving license flexibility in name, the licensing rules are explicitly structured to promote certain types of investment and business cases. The result is to encourage the same types of licensees to offer improved and more powerful versions of the same types of services, while discouraging more radical innovations.

Conclusion

Chairman Pai can rightly take pride in his overall 5G legacy. He preserved the institutional role of the FCC as the agency responsible for expanding our nation’s access to wireless services against sustained attack by federal agencies determined to protect their own spectrum interests. He provided enough green field spectrum for both licensed services and unlicensed services to permit the successful deployment of 5G and Wi-Fi 6. At the same time, however, he failed to encourage more radical spectrum policies that have made the United States the birthplace of such technologies as mobile broadband and Wi-Fi. We have won the “race” to next generation wireless, but the players and services are likely to stay the same.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”

A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.

This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.

Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”

Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”

Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.

But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.

It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.

If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.

The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proved her point.

Lucy Kay McBath, a first-term congresswoman from Georgia, began one such drill with the claim that Facebook’s privacy policy from 2004, when Zuckerberg was 20 and Facebook had under a million users, applies in perpetuity. “We do not and will not use cookies to collect private information from any users,” it said. Has Facebook broken its “promise,” McBath asked, not to use cookies to collect private information? No, Zuckerberg explained (letting the question’s shaky premise slide), Facebook uses only standard log-in cookies.

“So once again, you do not use cookies? Yes or no?” McBath interjected. Having now asked a completely different question, and gotten a response resembling what she wanted—“Yes, we use cookies [on log-in features]”—McBath could launch into her canned condemnation. “The bottom line here,” she said, reading from her page, “is that you broke a commitment to your users. And who can say whether you may or may not do that again in the future?” The representative pressed on with her performance, not noticing or not caring that the person she was pretending to engage with had upset her script.

Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?

In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?

The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”

Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.

Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.

The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Corbin Barthold, (Senior Litigation Counsel, Washington Legal Foundation).]

The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.

“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.

Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.

But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”

This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.

At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.

The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.

When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.

When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”

It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”

In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.

True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.

COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.


[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Robert Litan (Non-resident Senior Fellow, Economic Studies, The Brookings Institution; former Associate Director, Office of Management and Budget).]

We have moved well beyond testing as the highest priority for responding to the COVID disaster – although testing remains important – to meeting the immediate peak demand for hospital equipment and for ICU beds outside hospitals in most urban areas. President Trump recognized as much when he declared on March 18 that he was acting as a “wartime President.”

While the President invoked the Defense Production Act to have the private sector produce more ventilators and other necessary medical equipment, such as respirators and hospital gowns, that Act principally provides for government purchases and the authority to allocate scarce supplies. 

As part of this effort, if it is not already in the works, the President should require manufacturers of such equipment – especially ventilators – to license, at low or no royalties, any and all intellectual property rights required for such production to as many other manufacturers as are willing and capable of making this equipment as rapidly as possible, 24/7. The President should further direct the FDA to surge its inspector force to ensure that the processes and output of these other manufacturers comply with applicable FDA requirements. The same IP licensing requirement should extend to manufacturers of any other medical supplies expected to be in short supply.

To avoid price gouging – yes, this is one instance where market principles should be suspended – the declaration should cap the prices of future ventilators, including those manufactured by current suppliers, at their pre-crisis levels.

Second, to solve the bed shortage problem, some states (such as New York) are already investigating the use of existing facilities – schools, university dorms, hotel rooms, and the like. This idea should be mandated nationwide immediately, as part of the emergency declaration. The President has ordered a Navy hospital ship to help out with extra beds in New York, which is a good idea that should be extended to other coastal cities where this is possible. But he should also order the military, as needed, to assist with the conversion of land-based facilities – which require infection-free environments, special filtration systems, and the like – where private contractors are not available.

The costs for all this should be borne by the federal government, using the Disaster Relief Fund, authorized by the Stafford Act. As of year-end FY 2019, the balance in this fund was approximately $30 billion. It is not clear what the balance is expected to be after the outlays that have recently been ordered by the President, as relief for states and localities. If the DRF needs topping up, this should be urgently provided by the Congress, ideally as part of the third round of fiscal stimulus being considered this week. 

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails a cost-benefit test. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can afford to lose money on retail for decades (even though its retail business has been profitable for some time), in the expectation that it can someday raise prices after running all retail competition out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary, in which antitrust plaintiffs must show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he points to the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.

In the case of airline mergers, Appelbaum argues that the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement, with prices no longer falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers, takes it a step further. Data from legacy U.S. airline mergers appear to show they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy in telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures how prices are distributed across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, titled U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people live to one another, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th place among 29 countries (the other 28 mostly European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th out of the 29 countries studied if data usage is included (Model 3), and to 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

Country  Model 1 Price  Model 1 Rank  Model 2 Price  Model 2 Rank  Model 3 Price  Model 3 Rank  Model 4 Price  Model 4 Rank
Australia $78.30 28 $82.81 27 $102.63 26 $84.45 23
Austria $48.04 17 $60.59 15 $73.17 11 $74.02 17
Belgium $46.82 16 $66.62 21 $75.29 13 $81.09 22
Canada $69.66 27 $74.99 25 $92.73 24 $76.57 19
Chile $33.42 8 $73.60 23 $83.81 20 $88.97 25
Czech Republic $26.83 3 $49.18 6 $69.91 9 $60.49 6
Denmark $43.46 14 $52.27 8 $69.37 8 $63.85 8
Estonia $30.65 6 $56.91 12 $81.68 19 $69.06 12
Finland $35.00 9 $37.95 1 $57.49 2 $51.61 1
France $30.12 5 $44.04 4 $61.96 4 $54.25 3
Germany $36.00 12 $53.62 10 $75.09 12 $66.06 11
Greece $35.38 10 $64.51 19 $80.72 17 $78.66 21
Iceland $65.78 25 $73.96 24 $94.85 25 $90.39 26
Ireland $56.79 22 $62.37 16 $76.46 14 $64.83 9
Italy $29.62 4 $48.00 5 $68.80 7 $59.00 5
Japan $40.12 13 $53.58 9 $81.47 18 $72.12 15
Latvia $20.29 1 $42.78 3 $63.05 5 $52.20 2
Luxembourg $56.32 21 $54.32 11 $76.83 15 $72.51 16
Mexico $35.58 11 $91.29 29 $120.40 29 $109.64 29
Netherlands $44.39 15 $63.89 18 $89.51 21 $77.88 20
New Zealand $59.51 24 $81.42 26 $90.55 22 $76.25 18
Norway $88.41 29 $71.77 22 $103.98 27 $96.95 27
Portugal $30.82 7 $58.27 13 $72.83 10 $71.15 14
South Korea $25.45 2 $42.07 2 $52.01 1 $56.28 4
Spain $54.95 20 $87.69 28 $115.51 28 $106.53 28
Sweden $52.48 19 $52.16 7 $61.08 3 $70.41 13
Switzerland $66.88 26 $65.01 20 $91.15 23 $84.46 24
United Kingdom $50.77 18 $63.75 17 $79.88 16 $65.44 10
United States $58.00 23 $59.84 14 $64.75 6 $62.94 7
Average $46.55 $61.70 $80.24 $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality
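
To make the adjustment concrete, here is a minimal sketch, in Python, of the general approach such models take: regress observed prices on demographic covariates and compare rankings before and after the adjustment. The numbers, country labels, and single-regression setup are illustrative assumptions, not the FCC’s actual data or specification.

```python
import numpy as np

# Hypothetical data for four countries: observed broadband price (USD/month),
# population density (people per sq. km), and median income (thousands USD).
countries = ["Country A", "Country B", "Country C", "Country D"]
price = np.array([58.0, 46.8, 88.4, 30.1])
density = np.array([36.0, 383.0, 15.0, 119.0])
income = np.array([65.0, 52.0, 82.0, 44.0])

# Regress price on the demographic covariates (with an intercept). The residual
# is the part of the price not explained by demographics; re-ranking on the
# residuals captures the spirit of moving from an unadjusted comparison
# (Model 1) to a demographically adjusted one (Model 2).
X = np.column_stack([np.ones_like(price), density, income])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
adjusted = price - X @ beta

raw_rank = np.argsort(np.argsort(price)) + 1       # 1 = cheapest, unadjusted
adj_rank = np.argsort(np.argsort(adjusted)) + 1    # 1 = cheapest, adjusted
for name, r1, r2 in zip(countries, raw_rank, adj_rank):
    print(f"{name}: unadjusted rank {r1}, adjusted rank {r2}")
```

The point of the exercise is simply that a country that looks expensive in raw dollars can look considerably cheaper once its density and income are accounted for, which is the pattern the U.S. figures in the table above follow.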

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably relies on economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models when trying to understand the world. It is just a question of whether we are willing to change our minds when a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

In a recent NY Times opinion piece, Tim Wu, like Elizabeth Holmes, lionizes Steve Jobs. Like Jobs with the iPod and iPhone, and Holmes with the Theranos Edison machine, Wu tells us we must simplify the public’s experience of complex policy into a simple box with an intuitive interface. In this spirit he argues that “what the public wants from government is help with complexity,” such that “[t]his generation of progressives … must accept that simplicity and popularity are not a dumbing-down of policy.”

This argument provides remarkable insight into the complexity problems of progressive thought. Three of these are taken up below: the mismatch of comparing the work of the government to the success of Jobs; the mismatch between Wu’s telling of and Jobs’s actual success; and the latent hypocrisy in Wu’s “simplicity for me, complexity for thee” argument.

Contra Wu’s argument, we need politicians that embrace and lay bare the complexity of policy issues. Too much of our political moment is dominated by demagogues on every side of policy debates offering simple solutions to simplified accounts of complex policy issues. We need public intellectuals, and hopefully politicians as well, to make the case for complexity. Our problems are complex and solutions to them hard (and sometimes unavailing). Without leaders willing to steer into complexity, we can never have a polity able to address complexity.

I. “Good enough for government work” isn’t good enough for Jobs

As an initial matter, there is a great deal of wisdom in Wu’s recognition that the public doesn’t want complexity. As I said at the annual Silicon Flatirons conference in February, consumers don’t want a VCR with lots of dials and knobs that let them control lots of specific features—they just want the damn thing to work. And as that example is meant to highlight, once it does work, most consumers are happy to leave well enough alone (as demonstrated by millions of clocks that would continue to blink 12:00 if VCRs weren’t so 1990s).

Where Wu goes wrong, though, is that he fails to recognize that despite this desire for simplicity, for two decades VCR manufacturers designed and sold VCRs with clocks that were never set—a persistent blinking to constantly remind consumers of their own inadequacies. Had the manufacturers had any insight into the consumer desire for simplicity, all those clocks would have been used for something—anything—other than a reminder that consumers didn’t know how to set them. (Though, to their credit, these devices were designed to operate as most consumers desired without imposing any need to set the clock upon them—a model of simplicity in basic operation that allows consumers to opt-in to a more complex experience.)

If the government were populated by visionaries like Jobs, Wu’s prescription would be wise. But Jobs was a once-in-a-generation thinker. No one in a generation of VCR designers had the insight to design a VCR without a clock (or at least a clock that didn’t blink in a constant reminder of the owner’s inability to set it). And similarly few among the ranks of policy designers are likely to have his abilities, either. On the other hand, the public loves the promise of easy solutions to complex problems. Charlatans and demagogues who would cast themselves in his image, like Holmes did with Theranos, can find government posts in abundance.

Of course, in his paean to offering the public less choice, Wu, himself a frequent designer of government policy, compares the art of policy design to the work of Jobs—not of Holmes. But where he promises a government run in the manner of Apple, he would more likely give us one more in the mold of Theranos.

There is a more pernicious side to Wu’s argument. He speaks of respect for the public, arguing that “Real respect for the public involves appreciating what the public actually wants and needs,” and that “They would prefer that the government solve problems for them.” Another aspect of respect for the public is recognizing their fundamental competence—that progressive policy experts are not the only ones who are able to understand and address complexity. Most people never set their VCRs’ clocks because they felt no need to, not because they were unable to figure out how to do so. Most people choose not to master the intricacies of public policy. But this is not because the progressive expert class is uniquely able to do so. It is, as Wu himself notes, because most people do not have the unlimited time or attention that would be needed to do so—time and attention that is afforded to him by his social class.

Wu’s assertion that the public “would prefer that the government solve problems for them” carries echoes of Louis Brandeis, who famously said of consumers that they were “servile, self-indulgent, indolent, ignorant.” Such a view naturally gives rise to Wu’s assumption that the public wants the government to solve problems for them. It assumes that they are unable to solve those problems on their own.

But what Brandeis and progressives cast in his mold attribute to servile indolence is more often a reflection that hoi polloi simply do not have the same concerns as Wu’s progressive expert class. If they had the time to care about the issues Wu would devote his government to, they could likely address them on their own. The fact that they don’t is less a reflection of the public’s ability than of its priorities.

II. Jobs had no monopoly on simplicity

There is another aspect to Wu’s appeal to simplicity in design that is, again, captured well in his invocation of Steve Jobs. Jobs was exceptionally successful with his minimalist, simple designs. He made a fortune for himself and more for Apple. His ideas made Apple one of the most successful companies, with one of the largest user bases, in the history of the world.

Yet many people hate Apple products. Some of these users prefer to have more complex, customizable devices—perhaps because they have particularized needs or perhaps simply because they enjoy having that additional control over how their devices operate and the feeling of ownership that that brings. Some users might dislike Apple products because the interface that is “intuitive” to millions of others is not at all intuitive to them. As trivial as it sounds, most PC users are accustomed to two-button mice—transitioning to Apple’s one-button mouse is exceptionally discomfiting for many of these users. (In fairness, the one-button mouse design used by Apple products is not attributable to Steve Jobs.) And other users still might prefer devices that are simple in other ways, and so are drawn to other products that better cater to their precise needs.

Apple has, perhaps, experienced periods of market dominance with specific products. But this has never been durable—Apple has always faced competition. And this has ensured that those parts of the public that were not well-served by Jobs’s design choices were not bound to use them—they always had alternatives.

Indeed, that is the redeeming aspect of the Theranos story: the market did what it was supposed to. While too many consumers may have been harmed by Holmes’ charlatan business practices, the reality is that once she was forced to bring the company’s product to market it was quickly outed as a failure.

This is how the market works. Companies that design good products, like Apple, are rewarded; other companies then step in to compete by offering yet better products or by addressing other segments of the market. Some of those companies succeed; most, like Theranos, fail.

This dynamic simply does not exist with government. Government is a policy monopolist. A simplified, streamlined, policy that effectively serves half the population does not effectively serve the other half. There is no alternative government that will offer competing policy designs. And to the extent that a given policy serves part of the public better than others, it creates winners and losers.

Of course, the right response to the inadequacy of Wu’s call for more, less complex policy is not that we need more, more complex policy. Rather, it’s that we need less policy—at least policy being dictated and implemented by the government. This is one of the stalwart arguments we free market and classical liberal types offer in favor of market economies: they are able to offer a wider range of goods and services that better cater to a wider range of needs of a wider range of people than the government. The reason policy grows complex is because it is trying to address complex problems; and when it fails to address those problems on a first cut, the solution is more often than not to build “patch” fixes on top of the failed policies. The result is an ever-growing book of rules bound together with voluminous “kludges” that is forever out-of-step with the changing realities of a complex, dynamic world.

The solution to so much complexity is not to sweep it under the carpet in the interest of offering simpler, but only partial, solutions catered to the needs of an anointed subset of the public. The solution is to find better ways to address those complex problems—and oftentimes it is simply the case that the market is better suited to such solutions.

III. A complexity: What does Wu think of consumer protection?

There is a final, and perhaps most troubling, aspect to Wu’s argument. He argues that respect for the public does not require “offering complete transparency and a multiplicity of choices.” Yet that is what he demands of business. As an academic and government official, Wu has been a loud and consistent consumer protection advocate, arguing that consumers are harmed when firms fail to provide transparency and choice—and that the government must use its coercive power to ensure that they do so.

Wu derives his insight that simpler-design-can-be-better-design from the success of Jobs—and recognizes more broadly that the consumer experience of products of the technological revolution (perhaps one could even call it the tech industry) is much better today because of this simplicity than it was in earlier times. Consumers, in other words, can be better off with firms that offer less transparency and choice. This, of course, is intuitive when one recognizes (as Wu has) that time and attention are among the scarcest of resources.

Steve Jobs and Elizabeth Holmes both understood that the avoidance of complexity and minimizing of choices are hallmarks of good design. Jobs built an empire around this; Holmes cost investors hundreds of millions of dollars in her failed pursuit. But while Holmes failed where Jobs succeeded, her failure was not tragic: Theranos was never the only medical testing laboratory in the market and, indeed, was never more than a bit player in that market. For every Apple that thrives, the marketplace erases a hundred Theranoses. But we do not have a market of governments. Wu’s call for policy to be more like Apple is a call for most government policy to fail like Theranos. Perhaps where the challenge is to do more complex policy simply, the simpler solution is to do less, but simpler, policy well.

Conclusion

We need less dumbing down of complex policy in the interest of simplicity; and we need leaders who are able to make citizens comfortable with and understanding of complexity. Wu is right that good policy need not be complex. But the lesson from that is not that complex policy should be made simple. Rather, the lesson is that policy that cannot be made simple may not be good policy after all.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and Congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in that original piece. The first relates to expert bias and the second concerns office organization.  

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than they would have if they had simply chosen outcomes at random. In the technical parlance, this means expert opinions were not calibrated; there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events that experts deemed impossible occurred with some regularity. In a number of fields, these supposedly unlikely events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
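
For readers unfamiliar with the term, calibration is easy to check mechanically: group forecasts by predicted probability and compare each group’s average prediction with how often the event actually happened. The sketch below uses made-up forecasts purely for illustration; it is not Tetlock’s data or methodology.

```python
import numpy as np

# Hypothetical forecasts: predicted probability that an event occurs,
# paired with whether it actually occurred (1) or not (0).
predicted = np.array([0.9, 0.8, 0.8, 0.7, 0.3, 0.2, 0.9, 0.1, 0.6, 0.4])
occurred = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1])

# Bucket forecasts into five probability bins and compare each bin's mean
# prediction with the observed frequency of the event in that bin. For a
# well-calibrated forecaster the two numbers track each other closely;
# Tetlock's finding was a persistent gap between them.
edges = np.array([0.2, 0.4, 0.6, 0.8])
bin_ids = np.digitize(predicted, edges, right=True)

for b in range(5):
    mask = bin_ids == b
    if mask.any():
        print(f"bin {b}: mean forecast {predicted[mask].mean():.2f}, "
              f"observed frequency {occurred[mask].mean():.2f}")
```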

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to have overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,   

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and more risky political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there is a diminishing marginal predictive return to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. The coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believes the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and on videos online. Over time, external communication has risen to a prominent role in Congressional political offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard and fast conclusions, it could help explain why bolstering tech expertise hasn’t been a winning legislative issue. The demand just isn’t there. And given the priorities that offices do display a preference for, more expertise might not yield many benefits, while also giving offices a potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.