
In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.

While much of the commentary to date has been on whether Thomas got the legal analysis right, or on the uncomfortable fit of common-carriage law to social media, the deeper question of the First Amendment’s protection of private ordering has received relatively short shrift.

Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.

Who Balances the Benefits and Costs of Speech?

Like virtually any other human activity, speech has benefits and costs, and it is ultimately subjective individual preference that determines the value of any given speech. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment prevents only the government from restricting speech.

Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.

Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not expect to go to a restaurant and yell at the top of my lungs about political issues and expect them—even as “common carriers” or places of public accommodation—to allow me to continue.

As Thomas Sowell wrote in Knowledge and Decisions:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.

Knowledge and Decisions, p. 240

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to use regulation to make categorical decisions for all of society substitutes centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”

This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.
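For readers who want that incentive chain spelled out, here is a minimal sketch of a toy two-sided platform in Python. Every functional form and number in it is an invented assumption for illustration, not a claim about any actual platform:

```python
# Toy model of a two-sided platform (all numbers and functional forms are
# invented for illustration). Users stay when moderation tracks their
# preferences; ad revenue scales with the user base.

def users(moderation, ideal=0.6, tolerance=0.25):
    """User base (millions) falls off as moderation strays from users' ideal level."""
    return max(0.0, 100.0 * (1.0 - abs(moderation - ideal) / tolerance))

def profit(moderation, revenue_per_user=2.0, moderation_cost=50.0):
    """Ad revenue follows users; enforcing moderation itself costs money."""
    return revenue_per_user * users(moderation) - moderation_cost * moderation

# Sweep moderation levels from 0 (anything goes) to 1 (heavily policed).
best = max((m / 100 for m in range(101)), key=profit)
print(f"profit-maximizing moderation level: {best:.2f}")   # 0.60, not 0.00
print(f"users retained: {users(best):.0f}M, profit: {profit(best):.0f}")
```

In this toy model, the profit-maximizing moderation level sits at the users’ preferred level rather than at zero, which is the point: moderation is a value-maximizing product feature, not an afterthought.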

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.

Herein lies the fundamental difference between private action and state action: one is voluntary, the other coercive. If Facebook or Twitter suspends a user for violating community rules, that suspension terminates a previously voluntary association. If the government kicks someone out of a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

Knowledge and Decisions, p. 244

Markets Produce the Best Moderation Policies

The First Amendment also protects the right of editorial discretion: publishers, platforms, and other speakers cannot be compelled by the government to carry or transmit another’s speech. Even a newspaper with near-monopoly power cannot be compelled by a right-of-reply statute to carry responses by political candidates to editorials it has published. In other words, not only is private regulation of speech not state action, but in many cases private regulation is itself protected by the First Amendment.

There is no reason to think that social-media companies today are in a different position than was the newspaper in Miami Herald v. Tornillo. These companies must determine what, how, and where content is presented on their platforms. While this right of editorial discretion protects the moderation decisions of social-media companies, its benefits accrue to society at large.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has enabled internet users (including conservatives!) to have more avenues than ever to get their message out.

Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms all adopting very similar content-moderation policies on certain hot-button issues, and thus allege widespread anti-conservative bias and collusion. Neither of those claims has much factual support, but more importantly, the similarity of content-moderation standards may simply reflect common responses to similar demand structures—not some nefarious and conspiratorial plot.

In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.

Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.

Conclusion

Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to form voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than on centralized decisions, handed down by a governing few, that are difficult to change over time.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To exactly that effect, we published a letter, co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its inquiry into the state of antitrust law, to reject calls for radical upheaval that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolization of the market, and not, for example, related to the huge changes that businesses like these are facing because of the COVID-19 shutdown.

Both of these statements challenge the principle that the rule of law, including in antitrust, depends on political neutrality.

Contrary to the claims made by President Trump, Sen. Klobuchar, and some of those made to the Committee, our letter asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and in the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not on changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy, and was built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers, returning us to a regime in which per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency at the DOJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources to protect workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider; not all of them are supported by every one of the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in the ICLE’s own submission to the Committee, which goes into detail about the nature of competition in modern digital markets and in traditional markets that have been changed because of the adoption of digital technologies. 

The full letter is here.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, ICLE); Eric Fruits (Chief Economist, ICLE; Adjunct Professor of Economics, Portland State University); and Kristian Stout (Associate Director, ICLE).]

The COVID-19 pandemic is changing the way consumers shop and the way businesses sell. These shifts in behavior, designed to “flatten the curve” of infection through social distancing, are happening across many (if not all) markets. But in many cases, it’s impossible to know now whether these new habits are actually achieving the desired effect. 

Take a seemingly silly example from Oregon. The state is one of only two in the U.S. that prohibit self-serve gas. In response to COVID-19, the state fire marshal announced it would temporarily suspend enforcement of the prohibition. Public opinion fell into two broad groups. Those who want the option to pump their own gas argue that self-serve reduces the interaction between station attendants and consumers, thereby potentially reducing the spread of coronavirus. On the other hand, those who support the prohibition on self-serve have blasted the fire marshal’s announcement, arguing that all those dirty fingers pressing keypads and all those grubby hands on fuel pumps will likely increase the spread of the virus.

Both groups may be right, but no one yet knows the net effect. We can only speculate. This picture becomes even more complex when considering other, alternative policies. For instance, would it be more effective for the state of Oregon to curtail gas station visits by forcing the closure of stations? Probably not. Would it be more effective to reduce visits through some form of rationing? Maybe. Maybe not. 

Policymakers will certainly struggle to efficiently decide how firms and consumers should minimize the spread of COVID-19. That struggle is an extension of Hayek’s knowledge problem: policymakers don’t have adequate knowledge of alternatives, preferences, and the associated risks. 

A Hayekian approach — relying on bottom-up rather than top-down solutions to the problem — may be the most appropriate solution. Allowing firms to experiment and iteratively find solutions that work for their consumers and employees (potentially adjusting prices and wages in the process) may be the best that policymakers can do.

The case of online retail platforms

One area where these complex tradeoffs are particularly acute is that of online retail. In response to the pandemic, many firms have significantly boosted their online retail capacity. 

These initiatives have been met with a mix of enthusiasm and disapproval. On the one hand, online retail enables consumers to purchase “essential” goods with a significantly reduced risk of COVID-19 contamination. It also allows “non-essential” goods to be sold despite the closure of brick-and-mortar stores. At first blush, this seems like a win-win for both consumers and retailers of all sizes, with large retailers ramping up their online operations and independent retailers switching to online platforms such as Amazon.

But there is a potential downside. Even contactless deliveries do present some danger, notably for warehouse workers who run the risk of being infected and subsequently passing the virus on to others. This risk is amplified by the fact that many major retailers, including Walmart, Kroger, CVS, and Albertsons, are hiring more warehouse and delivery workers to meet an increase in online orders. 

This has led some to question whether sales of “non-essential” goods (though the term is almost impossible to define) should be halted. The reasoning is that continuing to supply such goods needlessly puts lives at risk and reduces overall efforts to slow the virus.

Once again, these are incredibly complex questions. It is hard to gauge the overall risk of infection that is produced by the online retail industry’s warehousing and distribution infrastructure. In particular, it is not clear how effective social distancing policies, widely imposed within these workplaces, will be at achieving distancing and, in turn, reducing infections. 

More fundamentally, whatever this risk turns out to be, it is almost impossible to weigh it against an appropriate counterfactual. 

Online retail is not the only area where this complex tradeoff arises. Analogous reasoning could, for instance, be applied to food-delivery platforms. Ordering a meal on UberEats carries some risk, but so do repeated trips to the grocery store. And there are legitimate concerns about the safety of food handlers working in close proximity to one another. These considerations make it hard for policymakers to strike the appropriate balance.

The good news: at least some COVID-related risks are being internalized

But there is also some good news. Firms, consumers and employees all have some incentive to mitigate these risks. 

Consumers want to purchase goods without getting contaminated; employees want to work in safe environments; and firms need to attract both consumers and employees, while minimizing potential liability. These (partially) aligned incentives will almost certainly cause these economic agents to take at least some steps that mitigate the spread of COVID-19. This might notably explain why many firms imposed social distancing measures well before governments started to take notice (here, here, and here). 

For example, one first-order effect of COVID-19 is that it has become more expensive for firms to hire warehouse workers. Not only have firms moved up along the supply curve (by hiring more workers at higher wages), but the curve itself has likely shifted upward, reflecting the increased opportunity cost of warehouse work. Predictably, this has resulted in higher wages: Amazon and Walmart recently increased the wages they pay warehouse workers, as have brick-and-mortar retailers such as Kroger.
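A back-of-the-envelope sketch of that supply-and-demand story might look like the following; the coefficients are invented, and only the direction of the changes matters:

```python
# Linear labor-market sketch for the warehouse-worker story above. The
# coefficients are invented; only the direction of the wage change matters.

def equilibrium(d_int, d_slope, s_int, s_slope):
    """Solve demand w = d_int - d_slope*q against supply w = s_int + s_slope*q."""
    q = (d_int - s_int) / (d_slope + s_slope)
    return q, d_int - d_slope * q

# Pre-pandemic: demand w = 30 - 0.1q, supply w = 10 + 0.1q (q in thousands).
q0, w0 = equilibrium(30, 0.1, 10, 0.1)   # q0 = 100, w0 = $20/hr

# Pandemic: more online orders shift demand out (higher demand intercept),
# while riskier work shifts the supply curve itself upward (higher supply
# intercept). Both hiring and wages rise.
q1, w1 = equilibrium(36, 0.1, 14, 0.1)   # q1 = 110, w1 = $25/hr

print(f"workers: {q0:.0f}k -> {q1:.0f}k, wage: ${w0:.2f} -> ${w1:.2f}")
```

Both effects push in the same direction: more workers hired, at higher wages, exactly the pattern the announcements from Amazon, Walmart, and Kroger suggest.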

Along similar lines, firms and employees will predictably bargain — through various channels — over the appropriate level of protection for those workers who must continue to work in-person.

For example, some companies have found ways to reduce risk while continuing operations:

  • CNBC reports Tyson Foods is using walk-through infrared body temperature scanners to check employees’ temperatures as they enter three of the company’s meat processing plants. Other companies planning to use scanners include Goldman Sachs, UPS, Ford, and Carnival Cruise Lines.
  • Kroger’s Fred Meyer chain of supermarkets is limiting the number of customers in each of its stores to half the occupancy allowed under international building codes. Kroger will use infrared sensors and predictive analytics to monitor the new capacity limits. The company already uses the technology to estimate how many checkout lanes are needed at any given time.
  • Trader Joe’s limits occupancy in its stores. Customers waiting to enter are asked to stand six feet apart, using Trader Joe’s logos marked on the sidewalk as spacing guides. Shopping carts are separated into groups of “sanitized” and “to be cleaned.” Each cart is thoroughly sprayed with disinfectant and wiped down with a clean cloth.

In other cases, bargaining over the right level of risk-mitigation has been pursued through more coercive channels, such as litigation and lobbying:

  • A recently filed lawsuit alleges that managers at an Illinois Walmart store failed to alert workers after several employees began showing symptoms of COVID-19. The suit claims Walmart “had a duty to exercise reasonable care in keeping the store in a safe and healthy environment and, in particular, to protect employees, customers and other individuals within the store from contracting COVID-19 when it knew or should have known that individuals at the store were at a very high risk of infection and exposure.” 
  • According to CNBC, a group of legislators, unions, and Amazon employees in New York wrote a letter to CEO Jeff Bezos calling on him to enact greater protections for warehouse employees who continue to work during the coronavirus outbreak. The Financial Times reports worker protests at Amazon warehouses in the US, France, and Italy. Worker protests have also been reported at a Barnes & Noble warehouse. Several McDonald’s locations have been hit with strikes.
  • In many cases, worker concerns about health and safety have been conflated with long-simmering issues of unionization, minimum wage, flexible scheduling, and paid time-off. For example, several McDonald’s strikes were reported to have been organized by “Fight for $15.”

Sometimes, there is simply no mutually advantageous solution, and businesses are left with no option other than to suspend their activities temporarily:

  • For instance, McDonald’s and Burger King have voluntarily closed their restaurants — including drive-thru and deliveries — in many European countries (here and here).
  • In Portland, Oregon, ChefStable, a restaurant group behind some of the city’s best-known restaurants, closed all 20 of its bars and restaurants for at least four weeks. In what he called a “crisis of conscience,” owner Kurt Huffman concluded it would be impossible to maintain safe social distancing for customers and staff.

This is certainly not to say that all is perfect. Employers, employees and consumers may have very strong disagreements about what constitutes the appropriate level of risk mitigation.

Moreover, the questions of balancing worker health and safety with that of consumers become all the more complex when we recognize that consumers and businesses are operating in a dynamic environment, making sometimes fundamental changes to reduce risk at many levels of the supply chain.

Likewise, not all businesses will be able to implement measures that mitigate the risk of COVID-19. For instance, “Big Business” might be in a better position to reduce risks to its workforce than smaller businesses. 

Larger firms tend to have the resources and economies of scale to make capital investments in temperature scanners or sensors. They have larger workforces whose employees can, say, shift from stocking shelves to sanitizing shopping carts. Several large employers, including Amazon, Kroger, and CVS, have offered higher wages to employees who are more likely to be exposed to the coronavirus. Smaller firms are less likely to have the resources to offer such wage premiums.

For example, Amazon recently announced that it would implement mandatory temperature checks, provide employees with protective equipment, and increase the frequency and intensity of cleaning at all its sites. And, as mentioned above, Tyson Foods announced that it would install temperature scanners at a number of sites. It is not clear whether smaller businesses are in a position to implement similar measures.

That’s not to say that small businesses can’t adjust. It’s just more difficult. For example, a small paint-your-own ceramics shop, Mimosa Studios, had to stop offering painting parties because of government-mandated social distancing. One way it’s mitigating the loss of business is with a paint-at-home package. Customers place an order online, and the studio delivers the ceramic piece, paints, and loaner brushes. When the customer is finished painting, Mimosa picks up the piece, fires it, and delivers the finished product. The approach doesn’t solve the problem, but it helps mitigate the losses.

Conclusion

In all likelihood, we can’t avoid all bad outcomes. There is, of course, some risk associated with even well-resourced large businesses continuing to operate, even as some of them play a crucial role during coronavirus-related lockdowns.

Currently, market actors are working within the broad outlines of lockdowns deemed necessary by policymakers. Given the intensely complicated risk calculation necessary to determine if any given individual truly needs an “essential” (or even a “nonessential”) good or service, the best thing that lawmakers can do for now is let properly motivated private actors continue to seek optimal outcomes together within the imposed constraints. 

So far, most individuals and the firms serving them are at least partially internalizing COVID-related risks. The right approach for lawmakers would be to watch this process and determine where it breaks down. Measures targeted to fix those breaches will almost inevitably outperform interventionist planning to determine exactly what is essential, what is nonessential, and who should be allowed to serve consumers in their time of need.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Corbin Barthold (Senior Litigation Counsel, Washington Legal Foundation).]

The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.

“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.

Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.

But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”

This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.

At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.

The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.

When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.

When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”

It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”

In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.

True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.

COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.


Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.
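As a concrete illustration of the assurance-contract device mentioned above, here is a minimal sketch; the names and numbers are hypothetical, and the key feature is simply that no one pays unless the public good is actually funded:

```python
# Minimal sketch of an assurance contract: contributors pledge toward a
# public good, and pledges are collected only if the funding threshold is
# met; otherwise everyone is refunded. The refund guarantee removes much
# of the risk of contributing. Names and numbers are hypothetical.

def assurance_contract(pledges, threshold):
    """Return (funded, amount_collected, refunds)."""
    total = sum(pledges.values())
    if total >= threshold:
        return True, total, {}
    return False, 0.0, dict(pledges)  # nobody pays unless the good is funded

pledges = {"Ann": 40.0, "Bob": 25.0, "Cat": 50.0}
funded, collected, refunds = assurance_contract(pledges, threshold=100.0)
print(f"funded: {funded}, collected: ${collected:.2f}, refunds: {refunds}")
# funded: True, collected: $115.00, refunds: {}
```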

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

           b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.
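To make the structure of that comparison explicit, here is a stylized sketch of the selection rule as just described; the options and cost figures are invented for illustration:

```python
# Stylized version of the book's selection rule: choose the regulatory
# response that minimizes the sum of expected error costs and decision
# costs. All figures below are invented for illustration.

OPTIONS = {
    # option: (expected error cost, decision/administrative cost), in $M
    "do nothing":                  (90, 0),
    "facilitate private ordering": (40, 10),
    "mandate specific conduct":    (25, 60),
}

def total_cost(option):
    error, decision = OPTIONS[option]
    return error + decision

best = min(OPTIONS, key=total_cost)
for name, (error, decision) in OPTIONS.items():
    print(f"{name:>28}: error {error:>3} + decision {decision:>3} = {error + decision}")
print(f"lowest combined cost: {best}")   # facilitate private ordering
```

Note that in this illustration the most aggressive intervention has the lowest error cost yet loses once decision costs (the knowledge problem, public choice risks, administrative burden) are counted, which is the book’s recurring theme.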

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard-core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (the purposive rules Hayek called thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will simply emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).
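To make that tradeoff concrete, here is a minimal sketch of the payoff calculation a vertically integrated ISP would face.  The figures are purely hypothetical (nothing here estimates any actual firm’s economics); the point is only that foreclosure pays whenever content-market gains exceed distribution-market losses:

```python
# Hypothetical illustration of the vertical-foreclosure tradeoff.
# All numbers are invented for exposition, not estimates of any real ISP.

def foreclosure_payoff(subscribers_lost: int,
                       annual_margin_per_subscriber: float,
                       content_market_gain: float) -> float:
    """Net payoff to a vertically integrated ISP from degrading a rival
    content provider: gains in the content market minus the margin lost
    when unhappy subscribers switch to another ISP."""
    distribution_loss = subscribers_lost * annual_margin_per_subscriber
    return content_market_gain - distribution_loss

# Suppose degrading a rival's content costs the ISP 50,000 subscribers
# at a $400 annual margin each (a $20M loss) but shifts $30M of profit
# to its own content arm.  Foreclosure is then privately rational:
print(foreclosure_payoff(50_000, 400.0, 30_000_000.0))  # 10000000.0 > 0
```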

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.  The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

My new book, How to Regulate: A Guide for Policymakers, will be published in a few weeks.  A while back, I promised a series of posts on the book’s key chapters.  I posted an overview of the book and a description of the book’s chapter on externalities.  I then got busy on another writing project (on horizontal shareholdings—more on that later) and dropped the ball.  Today, I resume my book summary with some thoughts from the book’s chapter on public goods.

With most goods, the owner can keep others from enjoying what she owns, and, if one person enjoys the good, no one else can do so.  Consider your coat or your morning cup of Starbucks.  You can prevent me from wearing your coat or drinking your coffee, and if you choose to let me wear the coat or drink the coffee, it’s not available to anyone else.

There are some amenities, though, that are “non-excludable,” meaning that the owner can’t prevent others from enjoying them, and “non-rivalrous,” meaning that one person’s consumption of them doesn’t prevent others from enjoying them as well.  National defense and local flood control systems (levees, etc.) are like this.  So are more mundane things like public art projects and fireworks displays.  Amenities that are both non-excludable and non-rivalrous are “public goods.”

[NOTE:  Amenities that are either non-excludable or non-rivalrous, but not both, are “quasi-public goods.”  Such goods include excludable but non-rivalrous “club goods” (e.g., satellite radio programming) and non-excludable but rivalrous “commons goods” (e.g., public fisheries).  The public goods chapter of How to Regulate addresses both types of quasi-public goods, but I won’t discuss them here.]
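For readers who think in code, the two-dimensional taxonomy sketched above reduces to a simple classification.  This is a toy illustration of my own, not anything from the book:

```python
# Toy sketch of the standard excludability/rivalry taxonomy of goods.

def classify_good(excludable: bool, rivalrous: bool) -> str:
    if excludable and rivalrous:
        return "private good"   # e.g., your coat or your cup of coffee
    if excludable:
        return "club good"      # e.g., satellite radio programming
    if rivalrous:
        return "commons good"   # e.g., a public fishery
    return "public good"        # e.g., national defense, levees, fireworks

print(classify_good(excludable=False, rivalrous=False))  # public good
```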

The primary concern with public goods is that they will be underproduced.  That’s because the producer, who must bear all the cost of producing the good, cannot exclude benefit recipients who do not contribute to the good’s production and thus cannot capture many of the benefits of his productive efforts.

Suppose, for example, that a levee would cost $5 million to construct and would create $10 million of benefit by protecting 500 homeowners from expected losses of $20,000 each (i.e., the levee would eliminate a 10% chance of a big flood that would cause each homeowner a $200,000 loss).  To maximize social welfare, the levee should be built.  But no single homeowner has an incentive to build the levee.  At least 250 homeowners would need to combine their resources to make the levee project worthwhile for participants (250 * $20,000 in individual benefit = $5 million), but most homeowners would prefer to hold out and see if their neighbors will finance the levee project without their help.  The upshot is that the levee never gets built, even though its construction is value-enhancing.
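The arithmetic of the holdout problem is easy to verify.  Here is a short sketch using only the figures from the example above:

```python
# The levee example, using the figures given in the text.
cost = 5_000_000        # cost of building the levee
homeowners = 500
flood_prob = 0.10       # chance of a big flood absent the levee
flood_loss = 200_000    # loss each homeowner suffers if the flood hits

benefit_per_owner = flood_prob * flood_loss      # $20,000 expected benefit
total_benefit = homeowners * benefit_per_owner   # $10,000,000

assert total_benefit > cost   # socially, the levee should be built

# But a coalition breaks even only once 250 owners contribute, and every
# owner prefers to hold out and free-ride on the neighbors' contributions:
min_coalition = cost / benefit_per_owner
print(total_benefit, min_coalition)   # 10000000.0 250.0
```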

Economists have often jumped from the observation that public goods are susceptible to underproduction to the conclusion that the government should tax people and use the revenues to provide public goods.  Consider, for example, this passage from a law school textbook by several renowned economists:

It is apparent that public goods will not be adequately supplied by the private sector. The reason is plain: because people can’t be excluded from using public goods, they can’t be charged money for using them, so a private supplier can’t make money from providing them. … Because public goods are generally not adequately supplied by the private sector, they have to be supplied by the public sector.

[Howell E. Jackson, Louis Kaplow, Steven Shavell, W. Kip Viscusi, & David Cope, Analytical Methods for Lawyers 362-63 (2003) (emphasis added).]

That last claim seems demonstrably false.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
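A stylized numerical sketch may make the point crisper.  Assume, purely for illustration (these curves are mine, not the book’s), linear marginal benefit and cost schedules.  The actor who ignores external costs picks a higher activity level than the one that maximizes social welfare:

```python
# Hypothetical linear curves illustrating overproduction under a
# negative externality.  The numbers are invented for exposition.

def chosen_quantity(marginal_benefit, marginal_cost, q_max=10_000):
    """Largest activity level at which marginal benefit still covers
    marginal cost (i.e., where the decision-maker stops expanding)."""
    return max((q for q in range(q_max)
                if marginal_benefit(q) >= marginal_cost(q)), default=0)

mb          = lambda q: 100 - 0.1 * q   # marginal benefit of the activity
private_mc  = lambda q: 20 + 0.1 * q    # costs the actor actually bears
external_mc = 30                        # per-unit cost borne by bystanders
social_mc   = lambda q: private_mc(q) + external_mc

print(chosen_quantity(mb, private_mc))  # 400: the privately chosen level
print(chosen_quantity(mb, social_mc))   # 250: the allocatively efficient level
```

With positive externalities the comparison simply runs the other way: the actor captures only part of the marginal benefit, so the privately chosen level falls short of the efficient one.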

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

Section 5 of the Federal Trade Commission Act proclaims that “[u]nfair methods of competition . . . are hereby declared unlawful.” The FTC has exclusive authority to enforce that provision and uses it to prosecute Sherman Act violations. The Commission also uses the provision to prosecute conduct that doesn’t violate the Sherman Act but is, in the Commission’s view, an “unfair method of competition.”

That’s somewhat troubling, for “unfairness” is largely in the eye of the beholder. One FTC Commissioner recently defined an unfair method of competition as an action that is “‘collusive, coercive, predatory, restrictive, or deceitful,’ or otherwise oppressive, [where the actor lacks] a justification grounded in its legitimate, independent self-interest.” Some years ago, a commissioner observed that a “standalone” Section 5 action (i.e., one not premised on conduct that would violate the Sherman Act) could be used to police “social and environmental harms produced as unwelcome by-products of the marketplace: resource depletion, energy waste, environmental contamination, worker alienation, the psychological and social consequences of producer-stimulated demands.” While it’s unlikely that any FTC Commissioner would go that far today, the fact remains that those subject to Section 5 really don’t know what it forbids.  And that situation flies in the face of the Rule of Law, which at a minimum requires that those in danger of state punishment know in advance what they’re not allowed to do.

In light of this fundamental Rule of Law problem (not to mention the detrimental chilling effect vague competition rules create), many within the antitrust community have called for the FTC to provide guidance on the scope of its “unfair methods of competition” authority. Most notably, two members of the five-member FTC—Commissioners Maureen Ohlhausen and Josh Wright—have publicly called for the Commission to promulgate guidelines. So have former FTC Chairman Bill Kovacic, a number of leading practitioners, and a great many antitrust scholars.

Unfortunately, FTC Chairwoman Edith Ramirez has opposed the promulgation of Section 5 guidelines. She says she instead “favor[s] the common law approach, which has been a mainstay of American antitrust policy since the turn of the twentieth century.” Chairwoman Ramirez observes that the common law method has managed to distill workable liability rules from broad prohibitions in the primary antitrust statutes. Section 1 of the Sherman Act, for example, provides that “[e]very contract, combination … or conspiracy, in restraint of trade … is declared to be illegal.” Section 2 prohibits actions to “monopolize, or attempt to monopolize … any part of … trade.” Clayton Act Section 7 forbids any merger whose effect “may be substantially to lessen competition, or tend to create a monopoly.” Just as the common law transformed these vague provisions into fairly clear liability rules, the Chairwoman says, it can be used to provide adequate guidance on Section 5.

The problem is, there is no Section 5 common law. As Commissioner Wright and his attorney-advisor Jan Rybnicek explain in a new paper, development of a common law—which concededly may be preferable to a prescriptive statutory approach, given its flexibility, ability to evolve with new learning, and sensitivity to time- and place-specific factors—requires certain conditions that do not exist in the Section 5 context.

The common law develops and evolves in a salutary direction because (1) large numbers of litigants do their best to persuade adjudicators of the superiority of their position; (2) the closest cases—those requiring the adjudicator to make fine distinctions—get appealed and reported; (3) the adjudicators publish opinions that set forth all relevant facts, the arguments of the parties, and why one side prevailed over the other; (4) commentators criticize published opinions that are unsound or rely on welfare-reducing rules; (5) adjudicators typically follow past precedents, tweaking (or occasionally overruling) them when they have been undermined; and (6) future parties rely on past decisions when planning their affairs.

Section 5 “adjudication,” such as it is, doesn’t look anything like this. Because the Commission has exclusive authority to bring standalone Section 5 actions, it alone picks the disputes that could form the basis of any common law. It then acts as both prosecutor and judge in the administrative action that follows. Not surprisingly, defendants, who cannot know the contours of a prohibition that will change with the composition of the Commission and who face an inherently biased tribunal, usually settle quickly. After all, they are, in Commissioner Wright’s words, both “shooting at a moving target and have the chips stacked against them.” As a result, we end up with very few disputes, and even those are not vigorously litigated.

Moreover, because nearly all standalone Section 5 actions result in settlements, we almost never end up with a reasoned opinion from an adjudicator explaining why she did or did not find liability on the facts at hand and why she rejected the losing side’s arguments. These sorts of opinions are absolutely crucial for the development of the common law. Chairwoman Ramirez says litigants can glean principles from other administrative documents like complaints and consent agreements, but those documents can’t substitute for a reasoned opinion that parses arguments and says which work, which don’t, and why. On top of all this, the FTC doesn’t even treat its own enforcement decisions as precedent! How on earth could the Commission’s body of enforcement decisions guide decision-making when each could well be a one-off?

I’m a huge fan of the common law. It generally accommodates the Hayekian “knowledge problem” far better than inflexible, top-down statutes. But it requires both inputs—lots of vigorously litigated disputes—and outputs—reasoned opinions that are recognized as presumptively binding. In the Section 5 context, we’re short on both. It’s time for guidelines.

UPDATE: I’ve been reliably informed that Vint Cerf coined the term “permissionless innovation,” and, thus, that he did so with the sorts of private impediments discussed below in mind rather than government regulation. So consider the title of this post changed to “Permissionless innovation SHOULD not mean ‘no contracts required,'” and I’ll happily accept that my version is the “bastardized” version of the term. Which just means that the original conception was wrong and thank god for disruptive innovation in policy memes!

Can we dispense with the bastardization of the “permissionless innovation” concept (best developed by Adam Thierer) to mean “no contracts required”? I’ve been seeing this more and more, but it’s been around for a while. Some examples from among the innumerable ones out there:

Vint Cerf on net neutrality in 2009:

We believe that the vast numbers of innovative Internet applications over the last decade are a direct consequence of an open and freely accessible Internet. Many now-successful companies have deployed their services on the Internet without the need to negotiate special arrangements with Internet Service Providers, and it’s crucial that future innovators have the same opportunity. We are advocates for “permissionless innovation” that does not impede entrepreneurial enterprise.

Net neutrality is replete with this sort of idea — that any impediment to edge providers (not networks, of course) doing whatever they want to do at a zero price is a threat to innovation.

Chet Kanojia (Aereo CEO) following the Aereo decision:

It is troubling that the Court states in its decision that, ‘to the extent commercial actors or other interested entities may be concerned with the relationship between the development and use of such technologies and the Copyright Act, they are of course free to seek action from Congress.’ (Majority, page 17)  That begs the question: Are we moving towards a permission-based system for technology innovation?

At least he puts it in the context of the Court’s suggestion that Congress pass a law, but what he really wants is to not have to ask “permission” of content providers to use their content.

Mike Masnick on copyright in 2010:

But, of course, the problem with all of this is that it goes back to creating permission culture, rather than a culture where people freely create. You won’t be able to use these popular or useful tools to build on the works of others — which, contrary to the claims of today’s copyright defenders, is a key component in almost all creativity you see out there — without first getting permission.

Fair use is, by definition, supposed to be “permissionless.”  But the concept is hardly limited to fair use: it is used to justify unlimited expansion of fair use, and it is extended by advocates to nearly all of copyright (see, e.g., Mike Masnick again), which otherwise requires those pernicious licenses (i.e., permission) from others.

The point is, when we talk about permissionless innovation for Tesla, Uber, Airbnb, commercial drones, online data and the like, we’re talking (or should be) about ex ante government restrictions on these things — the “permission” at issue is permission from the government, it’s the “permission” required to get around regulatory roadblocks imposed via rent-seeking and baseless paternalism. As Gordon Crovitz writes, quoting Thierer:

“The central fault line in technology policy debates today can be thought of as ‘the permission question,'” Mr. Thierer writes. “Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”

But it isn’t (or shouldn’t be) about private contracts.

Just about all human (commercial) activity requires interaction with others, and that means contracts and licenses.  You don’t see anyone complaining about the “permission” required to rent space from a landlord.  But that some form of “permission” may be required to use someone else’s creative works or other property (including broadband networks) is no different.  And, in fact, it is these sorts of contracts (and, yes, the revenue that may come with them) that facilitate people engaging with other commercial actors to produce things of value in the first place.  The same can’t be said of government permission.

Don’t get me wrong – there may be some net welfare-enhancing regulatory limits that might require forms of government permission. But the real concern is the pervasive abuse of these limits, imposed without anything approaching a rigorous welfare determination. There might even be instances where private permission, imposed, say, by a true monopolist, might be problematic.

But this idea that any contractual obligation amounts to a problematic impediment to innovation is absurd, and, in fact, precisely backward. Which is why net neutrality is so misguided. Instead of identifying actual, problematic impediments to innovation, it simply assumes that networks threaten edge innovation, without any corresponding benefit and with such certainty (although no actual evidence) that ex ante common carrier regulations are required.

“Permissionless innovation” is a great phrase and, well developed (as Adam Thierer has done), a useful concept. But its bastardization to justify interference with private contracts is unsupported and pernicious.