As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from HULU, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules.  The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.  The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct), are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate’s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

My new book, How to Regulate: A Guide for Policymakers, is now available on Amazon.  Inform Santa!

The book, published by Cambridge University Press, attempts to fill what I think is a huge hole in legal education:  It focuses on the substance of regulation and sets forth principles for designing regulatory approaches that will maximize social welfare.

Lawyers and law professors obsess over process.  (If you doubt that, sit in on a law school faculty meeting sometime!)  That obsession may be appropriate; process often determines substance.  Rarely, though, do lawyers receive training in how to design the substance of a rule or standard to address some welfare-reducing defect in private ordering.  That’s a shame, because lawyers frequently take the lead in crafting regulatory approaches.  They need to understand (1) why the unfortunate situation is occurring, (2) what options are available for addressing it, and (3) what the downsides of each option are.

Economists, of course, study those things.  But economists have their own blind spots.  Being unfamiliar with legal and regulatory processes, they often fail to comprehend how (1) government officials’ informational constraints and (2) special interests’ tendency to manipulate government power for private ends can impair a regulatory approach’s success.  (Economists affiliated with the Austrian and Public Choice schools are more attuned to those matters, but their insights are often ignored by the economists advising on regulatory approaches — see, e.g., the fine work of the Affordable Care Act architects.)

Enter How to Regulate.  The book endeavors to provide economic training to the lawyers writing rules and a sense of the “limits of law” to the economists advising them.

The book begins by setting forth an overarching goal for regulation (minimize the sum of error and decision costs) and a general plan for achieving that goal (think like a physician–identify the adverse symptom, diagnose the disease, consider the range of available remedies, and assess the side effects of each).  It then marches through six major bases for regulating: externalities, public goods, market power, information asymmetry, agency costs, and the cognitive and volitional quirks observed by behavioral economists.  For each of those bases for regulation, the book considers the symptoms that might justify a regulatory approach, the disease causing those symptoms (i.e., the underlying economics), the range of available remedies (the policy tools available), and the side effects of each (e.g., public choice concerns, mistakes from knowledge limitations).

I have been teaching How to Regulate this semester, and it’s been a blast.  Unfortunately, all of my students are in their last year of law school.  The book would be most meaningful, I think, to an upcoming second-year student.  It really lays out the basis for a number of areas of law beyond the common law:  environmental law, antitrust, corporate law, securities regulation, food labeling laws, consumer protection statutes, etc.

I was heartened to receive endorsements from a couple of very fine thinkers on regulation, both of whom have headed the Office of Information and Regulatory Affairs (the White House’s chief regulatory review body).  They also happen to occupy different spots on the ideological spectrum.

Judge Douglas Ginsburg of the D.C. Circuit wrote that the book “will be valuable for all policy wonks, not just policymakers.  It provides an organized and rigorous framework for analyzing whether and how inevitably imperfect regulation is likely to improve upon inevitably imperfect market outcomes.”

Harvard Law School’s Cass Sunstein wrote:  “This may well be the best guide, ever, to the regulatory state.  It’s brilliant, sharp, witty, and even-handed — and it’s so full of insights that it counts as a major contribution to both theory and practice.  Indispensable reading for policymakers all over the world, and also for teachers, students, and all those interested in what the shouting is really about.”

Bottom line:  There’s something for everybody in this book.  I wrote it because I think the ideas are important and under-studied.  And I really tried to make it as accessible (and occasionally funny!) as possible.

If you’re a professor and would be interested in a review copy for potential use in a class, or if you’re a potential reviewer, shoot me an email and I’ll request a review copy for you.

I didn’t know Fred as well as most of the others who have provided such fine tributes here.  As they have attested, he was a first-rate scholar, an inspiring teacher, and a devoted friend.  From my own experience with him, I can add that he was deliberate about investing in the next generation of market-oriented scholars.  I’m the beneficiary of that investment.

My first encounter with Fred came in 1994, when I was fresh out of college and working as a research fellow at Washington University’s Center for the Study of American Business.  I was trying to assess the common law’s effectiveness at dealing with the externalities that are now addressed through complex environmental statutes and regulations.  My longtime mentor, P.J. Hill, recommended that I call Fred for help.  Fred was happy to drop what he was doing in order to explain to an ignorant 22-year-old how the common law’s property rights-based doctrines could address a great many environmental problems.

After completing law school and a judicial clerkship, I took a one-year Olin Fellowship at Northwestern, where Fred was teaching.  Once again, he took time to help a newbie formulate ideas for articles and structure arguments.  But for the publications I produced at Northwestern, I probably couldn’t have landed a job teaching law.  And without Fred’s help, those publications wouldn’t have been nearly as strong.

A few years ago, Fred invited me to join as co-author of the fifth edition of his excellent antitrust casebook (co-authored with the magnificent Charlie Goetz).  How excited was I!  My initial excitement was over the opportunity to attach my name to those of two giants in the field.  What I didn’t realize at the time was how much I would learn from Fred and Charlie, both brilliant thinkers and lucid writers.

Fred and Charlie’s casebook continually emphasizes the decision-theoretic approach to antitrust – i.e., the view that antitrust rules and standards should be crafted so as to minimize the sum of error and decision costs.  As I worked on the casebook, my understanding of that regulatory approach deepened.  My recently published book, How to Regulate, extends the approach outside the antitrust context.

But for the experience working with Fred and Charlie on their casebook, I may never have recognized the broad applicability of the error cost approach to regulation, and I may never have completed How to Regulate.

In real life, people don’t get the sort of experience George Bailey had in It’s a Wonderful Life.  We never learn what people would have been like had we not influenced them.  I know for sure, though, that I would not be where I am today without Fred McChesney’s willingness to help me along the way.  I am most grateful.

My new book, How to Regulate: A Guide for Policymakers, will be published in a few weeks.  A while back, I promised a series of posts on the book’s key chapters.  I posted an overview of the book and a description of the book’s chapter on externalities.  I then got busy on another writing project (on horizontal shareholdings—more on that later) and dropped the ball.  Today, I resume my book summary with some thoughts from the book’s chapter on public goods.

With most goods, the owner can keep others from enjoying what she owns, and, if one person enjoys the good, no one else can do so.  Consider your coat or your morning cup of Starbucks.  You can prevent me from wearing your coat or drinking your coffee, and if you choose to let me wear the coat or drink the coffee, it’s not available to anyone else.

There are some amenities, though, that are “non-excludable,” meaning that the owner can’t prevent others from enjoying them, and “non-rivalrous,” meaning that one person’s consumption of them doesn’t prevent others from enjoying them as well.  National defense and local flood control systems (levees, etc.) are like this.  So are more mundane things like public art projects and fireworks displays.  Amenities that are both non-excludable and non-rivalrous are “public goods.”

[NOTE:  Amenities that are either non-excludable or non-rivalrous, but not both, are “quasi-public goods.”  Such goods include excludable but non-rivalrous “club goods” (e.g., satellite radio programming) and non-excludable but rivalrous “commons goods” (e.g., public fisheries).  The public goods chapter of How to Regulate addresses both types of quasi-public goods, but I won’t discuss them here.]

The primary concern with public goods is that they will be underproduced.  That’s because the producer, who must bear all the cost of producing the good, cannot exclude benefit recipients who do not contribute to the good’s production and thus cannot capture many of the benefits of his productive efforts.

Suppose, for example, that a levee would cost $5 million to construct and would create $10 million of benefit by protecting 500 homeowners from expected losses of $20,000 each (i.e., the levee would eliminate a 10% chance of a big flood that would cause each homeowner a $200,000 loss).  To maximize social welfare, the levee should be built.  But no single homeowner has an incentive to build the levee.  At least 250 homeowners would need to combine their resources to make the levee project worthwhile for participants (250 * $20,000 in individual benefit = $5 million), but most homeowners would prefer to hold out and see if their neighbors will finance the levee project without their help.  The upshot is that the levee never gets built, even though its construction is value-enhancing.
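The holdout arithmetic can be sketched in a few lines of code.  This is just a toy model using the post’s own numbers; the variable names are mine, not the book’s:

```python
# Toy model of the levee example (illustrative figures from the text).
LEVEE_COST = 5_000_000      # cost to construct the levee
HOMEOWNERS = 500            # homeowners the levee would protect
FLOOD_PROB = 0.10           # chance of the big flood
LOSS_PER_HOME = 200_000     # damage each homeowner would suffer in a flood

# Each homeowner's expected benefit is the expected loss avoided.
benefit_per_home = FLOOD_PROB * LOSS_PER_HOME   # $20,000
total_benefit = HOMEOWNERS * benefit_per_home   # $10,000,000

# Socially, the levee is worth building: total benefit exceeds total cost.
assert total_benefit > LEVEE_COST

# But a coalition is worthwhile for its members only if each member's
# cost share does not exceed the $20,000 benefit each receives.
min_coalition = LEVEE_COST / benefit_per_home   # 250 homeowners

print(min_coalition)  # 250.0 -- and everyone hopes to be among the other 250
```

The gap between `total_benefit` and `LEVEE_COST` is exactly the $5 million of social value that evaporates if every homeowner waits for the others to pay.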

Economists have often jumped from the observation that public goods are susceptible to underproduction to the conclusion that the government should tax people and use the revenues to provide public goods.  Consider, for example, this passage from a law school textbook by several renowned economists:

It is apparent that public goods will not be adequately supplied by the private sector. The reason is plain: because people can’t be excluded from using public goods, they can’t be charged money for using them, so a private supplier can’t make money from providing them. … Because public goods are generally not adequately supplied by the private sector, they have to be supplied by the public sector.

[Howell E. Jackson, Louis Kaplow, Steven Shavell, W. Kip Viscusi, & David Cope, Analytical Methods for Lawyers 362-63 (2003) (emphasis added).]

That last claim seems demonstrably false.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policymakers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
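The over-production logic can be made concrete with a toy calculation (my own illustrative numbers, not the book’s): suppose each unit of some activity yields the actor $10 of benefit, total costs rise with output, and the actor bears only 60% of the marginal cost, with the rest falling on bystanders.

```python
# Toy negative-externality model (illustrative numbers, not from the book).
BENEFIT_PER_UNIT = 10   # benefit to the actor from each unit of the activity
PRIVATE_SHARE = 0.6     # fraction of marginal cost the actor actually bears

def marginal_cost(q):
    """Full (social) marginal cost of the q-th unit: rises with output."""
    return q

# The actor expands output until benefit no longer covers *private* marginal cost.
q_private = next(q for q in range(100)
                 if BENEFIT_PER_UNIT <= PRIVATE_SHARE * marginal_cost(q))

# The social optimum stops where benefit no longer covers *full* marginal cost.
q_social = next(q for q in range(100)
                if BENEFIT_PER_UNIT <= marginal_cost(q))

print(q_private, q_social)  # 17 10 -- the cost spillover induces over-production
```

Flipping the spillover to the benefit side (the actor captures only part of each unit’s benefit but bears all of the cost) produces the mirror-image result: too little of the activity.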

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that scope could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

My Office Door

Thom Lambert —  12 November 2015

University professors often post things on their office doors—photos, news clippings, conference posters, political cartoons.   I’ve never been much for that.  The objective, I assume, is to express something about yourself: who you are, what interests you, what values you hold.  I’ve never participated in this custom because I haven’t wanted to alienate students who might not share my views.  That’s not to suggest that I’m shy about those views.  I will—and regularly do—share them with students, even those who I know disagree with me.  But if posting my views on the door were to dissuade students from coming to me to discuss those views (and contrary ones), I would be losing the opportunity to have a meaningful dialogue.  Plus, my tastes veer toward minimalism, and doors covered with postings are ugly. Thus, no postings.

Until today.  My institution, the University of Missouri, is at a crossroads.  We can be a place where ideas—even unpopular ones—are  freely expressed, exchanged, and scrutinized.  Or we can be a place where everyone’s feelings are protected at all times.   It’s one or the other.

Tuesday morning, I opened an email and thought, “What a great prank. It looks so official!”  The email, which was from the MU Police, read as follows:

To continue to ensure that the University of Missouri campus remains safe, the MU Police Department (MUPD) is asking individuals who witness incidents of hateful and/or hurtful speech or actions to:

  • Call the police immediately at 573-882-7201. (If you are in an emergency situation, dial 911.)
  • Give the communications operator a summary of the incident, including location.
  • Provide a detailed description of the individual(s) involved.
  • Provide a license plate and vehicle descriptions (if appropriate).
  • If possible and if it can be done safely, take a photo of the individual(s) with your cell phone.

Delays, including posting information to social media, can often reduce the chances of identifying the responsible parties. While cases of hateful and hurtful speech are not crimes, if the individual(s) identified are students, MU’s Office of Student Conduct can take disciplinary action.

As it turns out, it was no joke.  Anyone on my campus who witnesses “hurtful speech” is directed to call campus police—individuals who carry guns, drive squad cars, and regularly arrest people. Now rest assured, “cases of hateful and hurtful speech are not crimes.”  They can give rise to, at most, “disciplinary action” by the MU Office of Student Conduct.  But still, isn’t it a bit unsettling—chilling, even—to think that if you say something “hurtful” at Mizzou (e.g., gay marriage is an abomination, affirmative action is unfair and hurts those it is ostensibly designed to help, Christians who oppose gay marriage are bigots, Islam is not a religion of peace, white men are privileged in a way that leads to undeserved rewards, culture matters in cultivating success, Republicans are dumb), the police may track you down and you may be required to defend yourself before the student conduct committee?  Perhaps the MU Police, or whoever crafted that email (let’s get real…it wasn’t the police), didn’t really mean that all hurtful speech is potentially problematic.  But if that’s the case, then why did they word the email as they did?  Pandering to an unreasonable element, maybe?

Contrast Mizzou’s approach to that taken by Purdue University.  The day after the Mizzou email, Purdue president Mitch Daniels reminded members of the Purdue community that their school actually stands for both tolerance AND free speech.  Here’s his letter:

Purdue Letter


The contrast between Mizzou and Purdue couldn’t be starker.  And it really, really matters.  I hope that posting these two documents on my door (along with this spot-on Wall Street Journal editorial) will not dissuade students from engaging in dialogue with me.  But I can’t be demure on this one.  So I now have—much to my aesthetic chagrin—a decorated office door.  Please come in and talk, even if you think I’m wrong.


Unless you live under a rock, you know that the president and chancellor of the University of Missouri, where I teach law, have resigned in response to protests over their failure to respond to several (three identified) racist incidents on campus. A group called Concerned Student 1950 staged a series of protests and demanded, among other things, that our president resign, that he publicly denounce his white privilege, that the university mandate a “comprehensive racial awareness and inclusion curriculum,” and that the percentage of black faculty and staff be increased to 10% by the 2017-18 school year. Last week, a student embarked on a hunger strike until Concerned Student 1950’s demands were met, and students in solidarity moved into a tent village on one of our quads.  Over the weekend, the black members of our football team threatened to boycott play until the president resigned, and our coach, facing four straight losses and little prospect of another victory this season, agreed to support and publicize the boycott.  Yesterday morning, Mizzou faculty supporting the Concerned Student group walked out of their classes and headed to the quad, where faculty and administrators joined protesting students in blocking media access to the tent village in the middle of our public quad. Around 10:00 AM, the president resigned, and protesting students danced on the quad.  Toward the end of the day, the chancellor announced that he will move from his position at the end of the year.

The Mizzou Faculty Council and the administration of the law school have expressed to Mizzou students that the faculty fully supports them.  We faculty members have been encouraged to express that support ourselves.  I want to do that now.

I want to express my support for the students for a couple of reasons.  First, I really love Mizzou.  It’s a special place full of wonderful students.  I visited at the University of Minnesota a few years back, and I couldn’t wait to get back to Mizzou.  Our students reflect the amazing diversity of our state: inner city kids from St. Louis and Kansas City, kids from the suburbs, kids with southern accents from the bootheel, kids with near-Minnesota accents from the northern part of the state, rich kids from fancy prep schools, poor kids who went to public school in the inner city or farm towns.  Unlike so many public law schools, Mizzou has kept its operating costs and its tuition at reasonable levels, so an education here is really open to just about all qualified students from the state.  (Minnesota’s in-state law school tuition is $41,222; Missouri’s is $19,545.)  We mix everyone together and end up with a wonderful student body.  I simply adore my students.

Second, I want to support students who have been the subject of racist remarks because I, too, have experienced the pain of being mocked, criticized, ridiculed, etc. for who I am.  I was a not-very-athletic gay kid who attended the very traditional and somewhat jockish Fairview Christian Academy.  I followed that up with Wheaton College, Billy Graham’s alma mater.  For most of my formative years, I was continually reminded that I was deficient, flawed, damned.  Express slurs were few and far between (though they occurred), but I was not accepted for who I am.  I know the pain of exclusion, and I want both to provide an empathetic ear to my students who feel excluded and to sound a prophetic voice against those who discriminate.

But I could not really support my Mizzou students in this difficult time if I did not point out a few things.

First–The top administrators of a school of 35,000 people cannot prevent all instances of racism.  Ignorant, mean people are sometimes going to yell slurs from their pick-up trucks when they drive through campus.  Drunken frat boys are occasionally going to say ugly things.  When you ambush the homecoming parade, to which parents have brought their small children for a rah-rah college experience, some people are not going to be nice to you.  Those ambushed may be taken aback and may not say all the right things.  People who draw things with poop are especially hard to control. Be prepared: The people who replace the deposed president and chancellor at Mizzou are unlikely to prevent every racist incident on our campus.

Second–The U.S. Constitution forbids state institutions from employing racial quotas.  Having been involved in hiring at Mizzou for a number of years, I can assure you that we bend over backward to fill open positions with qualified minority applicants. It is highly unlikely that Concerned Student 1950’s demand that the percentage of black faculty and staff at Mizzou be raised to 10% by 2017-18 can be implemented in a manner consistent with constitutional obligations.  You should know that.

Third–Free speech means more than the freedom to express views with which you agree.  I honestly think most Mizzou students understand this point, but I’m afraid that the administrator and communications professor in this video don’t grasp it.  Lest you be misled by their ill-advised bullying, you should know that the First Amendment is for everyone.

Fourth–Unreasonable demands have consequences.  We will survive this, but Mizzou has been badly weakened.  I can’t imagine that the press accounts from the last week will help with minority student and faculty recruitment next year.  That’s a shame, because based on my encounters with a great many minority students and professors at Mizzou over the past twelve years, I believe most have had good experiences.  Perhaps they haven’t been honest with me.  Or perhaps the situation has changed in the last couple of years.  If so, I’m terribly sorry to hear that. But, following the events of the last week, I can’t imagine that next year will be better.

Fifth–Regardless of your take on the events of the last week, I hope you will not let bitterness reign in your hearts.  Unlike many of my gay friends from conservative religious backgrounds, I chose years ago not to write off those people who were once unkind to me.  I’m glad I made that choice.  I hope any Mizzou student who is currently feeling marginalized for any reason will keep calm, carry on, give others the benefit of the doubt, and be open to reconciliation.

So, Mizzou students, I support you.  But I will not coddle you.  You’re adults and should be treated as such.


Alden Abbott and I recently co-authored an article, forthcoming in the Journal of Competition Law and Economics, in which we examined the degree to which the Supreme Court and the federal enforcement agencies have recognized the inherent limits of antitrust law. We concluded that the Roberts Court has admirably acknowledged those limits and has for the most part crafted liability rules that will maximize antitrust’s social value. The enforcement agencies, by contrast, have largely ignored antitrust’s intrinsic limits. In a number of areas, they have sought to expand antitrust’s reach in ways likely to reduce consumer welfare.

The bright spot in federal antitrust enforcement in the last few years has been Josh Wright. Time and again, he has bucked the antitrust establishment, reminding the mandarins that their goal should not be to stop every instance of anticompetitive behavior but instead to optimize antitrust by minimizing the sum of error costs (from both false negatives and false positives) and decision costs. As Judge Easterbrook famously explained, and as Josh Wright has emphasized more than anyone I know, inevitable mistakes (error costs) and heavy information requirements (decision costs) constrain what antitrust can do. Every liability rule, every defense, every immunity doctrine should be crafted with those limits in mind.

Josh will no doubt be remembered, and justifiably so, for spearheading the effort to provide guidance on how the Federal Trade Commission will exercise its amorphous authority to police “unfair methods of competition.” Several others have lauded Josh’s fine contribution on that matter (as have I), so I won’t gild that lily here. Instead, let me briefly highlight two other areas in which Josh has properly pushed for a recognition of antitrust’s inherent limits.

Vertical Restraints

Vertical restraints—both intrabrand restraints like resale price maintenance (RPM) and interbrand restraints like exclusive dealing—are a competitive mixed bag. Under certain conditions, such restraints may reduce overall market output, causing anticompetitive harm. Under other, more commonly occurring conditions, vertical restraints may enhance market output. Empirical evidence suggests that most vertical restraints are output-enhancing rather than output-reducing. Enforcers taking an optimizing, limits of antitrust approach will therefore exercise caution in condemning or discouraging vertical restraints.

That’s exactly what Josh Wright has done. In an early post-Leegin RPM order predating Josh’s tenure, the FTC endorsed a liability rule that placed an inappropriately heavy burden on RPM defendants. Josh later laid the groundwork for correcting that mistake, advocating a much more evidence-based (and defendant-friendly) RPM rule. In the McWane case, the Commission condemned an exclusive dealing arrangement that had been in place for long enough to cause anticompetitive harm but hadn’t done so. Josh rightly called out the majority for elevating theoretical harm over actual market evidence. (Adopting a highly deferential stance, the Eleventh Circuit affirmed the Commission majority, but Josh was right to criticize the majority’s implicit hostility toward exclusive dealing.) In settling the Graco case, the Commission again went beyond the evidence, requiring the defendant to cease exclusive dealing and to stop giving loyalty rebates even though there was no evidence that either sort of vertical restraint contributed to the anticompetitive harm giving rise to the action at issue. Josh rightly took the Commission to task for reflexively treating vertical restraints as suspect when they’re usually procompetitive and had an obvious procompetitive justification (avoidance of interbrand free-riding) in the case at hand.

Horizontal Mergers

Horizontal mergers, like vertical restraints, are competitive mixed bags. Any particular merger of competitors may impose some consumer harm by reducing the competition facing the merged firm. The same merger, though, may provide some consumer benefit by lowering the merged firm’s costs and thereby allowing it to compete more vigorously (most notably, by lowering its prices). A merger policy committed to minimizing the consumer welfare losses from unwarranted condemnations of net beneficial mergers and improper acquittals of net harmful ones would afford equal treatment to claims of anticompetitive harm and procompetitive benefit, requiring each to be established by the same quantum of proof.

The federal enforcement agencies’ new Horizontal Merger Guidelines, however, may put a thumb on the scale, tilting the balance toward a finding of anticompetitive harm. The Guidelines make it easier for the agencies to establish likely anticompetitive harm. Enforcers may now avoid defining a market if they point to adverse unilateral effects using the gross upward pricing pressure index (GUPPI). The merging parties, by contrast, bear a heavy burden when they seek to show that their contemplated merger will occasion efficiencies. They must: (1) prove that any claimed efficiencies are “merger-specific” (i.e., incapable of being achieved absent the merger); (2) “substantiate” asserted efficiencies; and (3) show that such efficiencies will result in the very markets in which the agencies have established likely anticompetitive effects.
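
The GUPPI calculation itself is simple. A minimal sketch — the input figures and the safe-harbor cutoff in the comment are hypothetical, but the structure follows the standard formulation: the diversion ratio toward the merging partner's product, times that product's margin, times its relative price:

```python
def guppi(diversion_ratio: float, partner_margin: float,
          partner_price: float, own_price: float) -> float:
    """Gross upward pricing pressure index for one merging firm's product.

    diversion_ratio: share of sales lost from a price rise on the 'own'
        product that is recaptured by the merging partner's product.
    partner_margin: partner product's price-cost margin, as a fraction of price.
    """
    return diversion_ratio * partner_margin * (partner_price / own_price)

# Hypothetical figures: 20% diversion, 40% margin, equally priced products.
score = guppi(0.20, 0.40, 10.0, 10.0)
print(f"GUPPI = {score:.0%}")  # GUPPI = 8%, below a hypothetical 10% safe harbor
```

A safe harbor of the kind Josh advocated would simply decline to presume unilateral effects when this score falls below some threshold.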

In an important dissent (Ardagh), Josh observed that the agencies’ practice has evolved such that there are asymmetric burdens in establishing competitive effects, and he cautioned that this asymmetry will enhance error costs. (Geoff praised that dissent here.) In another dissent (Family Dollar/Dollar Tree), Josh acknowledged some potential problems with the promising but empirically unverified GUPPI, and he wisely advocated the creation of safe harbors for mergers generating very low GUPPI scores. (I praised that dissent here.)

I could go on and on, but these examples suffice to illustrate what has been, in my opinion, Josh’s most important contribution as an FTC commissioner: his constant effort to strengthen antitrust’s effectiveness by acknowledging its inevitable and inexorable limits. Coming on the heels of the FTC’s and DOJ’s rejection of the Section 2 Report—a document that was highly attuned to antitrust’s limits—Josh was just what antitrust needed.