
As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct), are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate’s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

My new book, How to Regulate: A Guide for Policymakers, is now available on Amazon.  Inform Santa!

The book, published by Cambridge University Press, attempts to fill what I think is a huge hole in legal education:  It focuses on the substance of regulation and sets forth principles for designing regulatory approaches that will maximize social welfare.

Lawyers and law professors obsess over process.  (If you doubt that, sit in on a law school faculty meeting sometime!) That obsession may be appropriate; process often determines substance.  Rarely, though, do lawyers receive training in how to design the substance of a rule or standard to address some welfare-reducing defect in private ordering.  That’s a shame, because lawyers frequently take the lead in crafting regulatory approaches.  They need to understand (1) why the unfortunate situation is occurring, (2) what options are available for addressing it, and (3) what the downsides are to each of those options.

Economists, of course, study those things.  But economists have their own blind spots.  Being unfamiliar with legal and regulatory processes, they often fail to comprehend how (1) government officials’ informational constraints and (2) special interests’ tendency to manipulate government power for private ends can impair a regulatory approach’s success.  (Economists affiliated with the Austrian and Public Choice schools are more attuned to those matters, but their insights are often ignored by the economists advising on regulatory approaches — see, e.g., the fine work of the Affordable Care Act architects.)

Enter How to Regulate.  The book endeavors to provide economic training to the lawyers writing rules and a sense of the “limits of law” to the economists advising them.

The book begins by setting forth an overarching goal for regulation (minimize the sum of error and decision costs) and a general plan for achieving that goal (think like a physician: identify the adverse symptom, diagnose the disease, consider the range of available remedies, and assess the side effects of each).  It then marches through six major bases for regulating: externalities, public goods, market power, information asymmetry, agency costs, and the cognitive and volitional quirks observed by behavioral economists.  For each of those bases for regulation, the book considers the symptoms that might justify a regulatory approach, the disease causing those symptoms (i.e., the underlying economics), the range of available remedies (the policy tools available), and the side effects of each (e.g., public choice concerns, mistakes from knowledge limitations).

I have been teaching How to Regulate this semester, and it’s been a blast.  Unfortunately, all of my students are in their last year of law school.  The book would be most meaningful, I think, to an upcoming second-year student.  It really lays out the basis for a number of areas of law beyond the common law:  environmental law, antitrust, corporate law, securities regulation, food labeling laws, consumer protection statutes, etc.

I was heartened to receive endorsements from a couple of very fine thinkers on regulation, both of whom have headed the Office of Information and Regulatory Affairs (the White House’s chief regulatory review body).  They also happen to occupy different spots on the ideological spectrum.

Judge Douglas Ginsburg of the D.C. Circuit wrote that the book “will be valuable for all policy wonks, not just policymakers.  It provides an organized and rigorous framework for analyzing whether and how inevitably imperfect regulation is likely to improve upon inevitably imperfect market outcomes.”

Harvard Law School’s Cass Sunstein wrote:  “This may well be the best guide, ever, to the regulatory state.  It’s brilliant, sharp, witty, and even-handed — and it’s so full of insights that it counts as a major contribution to both theory and practice.  Indispensable reading for policymakers all over the world, and also for teachers, students, and all those interested in what the shouting is really about.”

Bottom line:  There’s something for everybody in this book.  I wrote it because I think the ideas are important and under-studied.  And I really tried to make it as accessible (and occasionally funny!) as possible.

If you’re a professor and would be interested in a review copy for potential use in a class, or if you’re a potential reviewer, shoot me an email and I’ll request a review copy for you.

I didn’t know Fred as well as most of the others who have provided such fine tributes here.  As they have attested, he was a first-rate scholar, an inspiring teacher, and a devoted friend.  From my own experience with him, I can add that he was deliberate about investing in the next generation of market-oriented scholars.  I’m the beneficiary of that investment.

My first encounter with Fred came in 1994, when I was fresh out of college and working as a research fellow at Washington University’s Center for the Study of American Business.  I was trying to assess the common law’s effectiveness at dealing with the externalities that are now addressed through complex environmental statutes and regulations.  My longtime mentor, P.J. Hill, recommended that I call Fred for help.  Fred was happy to drop what he was doing in order to explain to an ignorant 22-year-old how the common law’s property rights-based doctrines could address a great many environmental problems.

After completing law school and a judicial clerkship, I took a one-year Olin Fellowship at Northwestern, where Fred was teaching.  Once again, he took time to help a newbie formulate ideas for articles and structure arguments.  But for the publications I produced at Northwestern, I probably couldn’t have landed a job teaching law.  And without Fred’s help, those publications wouldn’t have been nearly as strong.

A few years ago, Fred invited me to join as co-author of the fifth edition of his excellent antitrust casebook (co-authored with the magnificent Charlie Goetz).  How excited I was!  My initial excitement was over the opportunity to attach my name to those of two giants in the field.  What I didn’t realize at the time was how much I would learn from Fred and Charlie, both brilliant thinkers and lucid writers.

Fred and Charlie’s casebook continually emphasizes the decision-theoretic approach to antitrust – i.e., the view that antitrust rules and standards should be crafted so as to minimize the sum of error and decision costs.  As I worked on the casebook, my understanding of that regulatory approach deepened.  My recently published book, How to Regulate, extends the approach outside the antitrust context.

But for the experience working with Fred and Charlie on their casebook, I may never have recognized the broad applicability of the error cost approach to regulation, and I may never have completed How to Regulate.

In real life, people don’t get the sort of experience George Bailey had in It’s a Wonderful Life.  We never learn what people would have been like had we not influenced them.  I know for sure, though, that I would not be where I am today without Fred McChesney’s willingness to help me along the way.  I am most grateful.

My new book, How to Regulate: A Guide for Policymakers, will be published in a few weeks.  A while back, I promised a series of posts on the book’s key chapters.  I posted an overview of the book and a description of the book’s chapter on externalities.  I then got busy on another writing project (on horizontal shareholdings—more on that later) and dropped the ball.  Today, I resume my book summary with some thoughts from the book’s chapter on public goods.

With most goods, the owner can keep others from enjoying what she owns, and, if one person enjoys the good, no one else can do so.  Consider your coat or your morning cup of Starbucks.  You can prevent me from wearing your coat or drinking your coffee, and if you choose to let me wear the coat or drink the coffee, it’s not available to anyone else.

There are some amenities, though, that are “non-excludable,” meaning that the owner can’t prevent others from enjoying them, and “non-rivalrous,” meaning that one person’s consumption of them doesn’t prevent others from enjoying them as well.  National defense and local flood control systems (levees, etc.) are like this.  So are more mundane things like public art projects and fireworks displays.  Amenities that are both non-excludable and non-rivalrous are “public goods.”

[NOTE:  Amenities that are either non-excludable or non-rivalrous, but not both, are “quasi-public goods.”  Such goods include excludable but non-rivalrous “club goods” (e.g., satellite radio programming) and non-excludable but rivalrous “commons goods” (e.g., public fisheries).  The public goods chapter of How to Regulate addresses both types of quasi-public goods, but I won’t discuss them here.]

The primary concern with public goods is that they will be underproduced.  That’s because the producer, who must bear all the cost of producing the good, cannot exclude benefit recipients who do not contribute to the good’s production and thus cannot capture many of the benefits of his productive efforts.

Suppose, for example, that a levee would cost $5 million to construct and would create $10 million of benefit by protecting 500 homeowners from expected losses of $20,000 each (i.e., the levee would eliminate a 10% chance of a big flood that would cause each homeowner a $200,000 loss).  To maximize social welfare, the levee should be built.  But no single homeowner has an incentive to build the levee.  At least 250 homeowners would need to combine their resources to make the levee project worthwhile for participants (250 * $20,000 in individual benefit = $5 million), but most homeowners would prefer to hold out and see if their neighbors will finance the levee project without their help.  The upshot is that the levee never gets built, even though its construction is value-enhancing.
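
To see the free-rider arithmetic laid out explicitly, here is a minimal sketch in Python that uses only the hypothetical figures from the example above (500 homeowners, a 10% chance of a $200,000 loss each, and a $5 million levee); the variable names are mine, chosen purely for illustration.

    # Hypothetical levee example from the text: each homeowner faces a 10%
    # chance of a $200,000 flood loss that the levee would eliminate.
    num_homeowners = 500
    flood_probability = 0.10
    loss_per_homeowner = 200_000
    levee_cost = 5_000_000

    expected_benefit_each = flood_probability * loss_per_homeowner  # $20,000
    total_benefit = num_homeowners * expected_benefit_each          # $10,000,000

    # The levee is socially worthwhile: total benefit exceeds total cost.
    assert total_benefit > levee_cost

    # But no single homeowner captures enough benefit to build it alone; a
    # coalition must include at least cost / per-person benefit contributors.
    min_contributors = levee_cost / expected_benefit_each           # 250 homeowners

    print(f"Expected benefit per homeowner: ${expected_benefit_each:,.0f}")
    print(f"Total benefit ${total_benefit:,.0f} vs. levee cost ${levee_cost:,.0f}")
    print(f"Minimum contributors needed to break even: {min_contributors:.0f}")

Because each homeowner would rather let 250 or more neighbors foot the bill, the value-enhancing coalition tends not to form, which is the underproduction problem described above.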

Economists have often jumped from the observation that public goods are susceptible to underproduction to the conclusion that the government should tax people and use the revenues to provide public goods.  Consider, for example, this passage from a law school textbook by several renowned economists:

It is apparent that public goods will not be adequately supplied by the private sector. The reason is plain: because people can’t be excluded from using public goods, they can’t be charged money for using them, so a private supplier can’t make money from providing them. … Because public goods are generally not adequately supplied by the private sector, they have to be supplied by the public sector.

[Howell E. Jackson, Louis Kaplow, Steven Shavell, W. Kip Viscusi, & David Cope, Analytical Methods for Lawyers 362-63 (2003) (emphasis added).]

That last claim seems demonstrably false.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policymakers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policymakers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

My Office Door

Thom Lambert — 12 November 2015

University professors often post things on their office doors—photos, news clippings, conference posters, political cartoons.   I’ve never been much for that.  The objective, I assume, is to express something about yourself: who you are, what interests you, what values you hold.  I’ve never participated in this custom because I haven’t wanted to alienate students who might not share my views.  That’s not to suggest that I’m shy about those views.  I will—and regularly do—share them with students, even those who I know disagree with me.  But if posting my views on the door were to dissuade students from coming to me to discuss those views (and contrary ones), I would be losing the opportunity to have a meaningful dialogue.  Plus, my tastes veer toward minimalism, and doors covered with postings are ugly. Thus, no postings.

Until today.  My institution, the University of Missouri, is at a crossroads.  We can be a place where ideas—even unpopular ones—are freely expressed, exchanged, and scrutinized.  Or we can be a place where everyone’s feelings are protected at all times.  It’s one or the other.

Tuesday morning, I opened an email and thought, “What a great prank. It looks so official!”  The email, which was from the MU Police, read as follows:

To continue to ensure that the University of Missouri campus remains safe, the MU Police Department (MUPD) is asking individuals who witness incidents of hateful and/or hurtful speech or actions to:

  • Call the police immediately at 573-882-7201. (If you are in an emergency situation, dial 911.)
  • Give the communications operator a summary of the incident, including location.
  • Provide a detailed description of the individual(s) involved.
  • Provide a license plate and vehicle descriptions (if appropriate).
  • If possible and if it can be done safely, take a photo of the individual(s) with your cell phone.

Delays, including posting information to social media, can often reduce the chances of identifying the responsible parties. While cases of hateful and hurtful speech are not crimes, if the individual(s) identified are students, MU’s Office of Student Conduct can take disciplinary action.

As it turns out, it was no joke.  Anyone on my campus who witnesses “hurtful speech” is directed to call campus police—individuals who carry guns, drive squad cars, and regularly arrest people. Now rest assured, “cases of hateful and hurtful speech are not crimes.”  They can give rise to, at most, “disciplinary action” by the MU Office of Student Conduct.  But still, isn’t it a bit unsettling—chilling, even—to think that if you say something “hurtful” at Mizzou (e.g., gay marriage is an abomination, affirmative action is unfair and hurts those it is ostensibly designed to help, Christians who oppose gay marriage are bigots, Islam is not a religion of peace, white men are privileged in a way that leads to undeserved rewards, culture matters in cultivating success, Republicans are dumb), the police may track you down and you may be required to defend yourself before the student conduct committee?  Perhaps the MU Police, or whoever crafted that email (let’s get real…it wasn’t the police), didn’t really mean that all hurtful speech is potentially problematic.  But if that’s the case, then why did they word the email as they did?  Pandering to an unreasonable element, maybe?

Contrast Mizzou’s approach to that taken by Purdue University.  The day after the Mizzou email, Purdue president Mitch Daniels reminded members of the Purdue community that their school actually stands for both tolerance AND free speech.  Here’s his letter:

Purdue Letter


The contrast between Mizzou and Purdue couldn’t be starker.  And it really, really matters.  I hope that posting these two documents on my door (along with this spot-on Wall Street Journal editorial) will not dissuade students from engaging in dialogue with me.  But I can’t be demure on this one.  So I now have—much to my aesthetic chagrin—a decorated office door.  Please come in and talk, even if you think I’m wrong.


Unless you live under a rock, you know that the president and chancellor of the University of Missouri, where I teach law, have resigned in response to protests over their failure to respond to several (three identified) racist incidents on campus. A group called Concerned Student 1950 staged a series of protests and demanded, among other things, that our president resign, that he publicly denounce his white privilege, that the university mandate a “comprehensive racial awareness and inclusion curriculum,” and that the percentage of black faculty and staff be increased to 10% by the 2017-18 school year. Last week, a student embarked on a hunger strike until Concerned Student 1950’s demands were met, and students in solidarity moved into a tent village on one of our quads.  Over the weekend, the black members of our football team threatened to boycott play until the president resigned, and our coach, facing four straight losses and little prospect of another victory this season, agreed to support and publicize the boycott.  Yesterday morning, Mizzou faculty supporting the Concerned Student group walked out of their classes and headed to the quad, where faculty and administrators joined protesting students in blocking media access to the tent village in the middle of our public quad. Around 10:00 AM, the president resigned, and protesting students danced on the quad.  Toward the end of the day, the chancellor announced that he will move from his position at the end of the year.

The Mizzou Faculty Council and the administration of the law school have expressed to Mizzou students that the faculty fully supports them.  We faculty members have been encouraged to express that support ourselves.  I want to do that now.

I want to express my support for the students for a couple of reasons.  First, I really love Mizzou.  It’s a special place full of wonderful students.  I visited at the University of Minnesota a few years back, and I couldn’t wait to get back to Mizzou.  Our students reflect the amazing diversity of our state: inner city kids from St. Louis and Kansas City, kids from the suburbs, kids with southern accents from the bootheel, kids with near-Minnesota accents from the northern part of the state, rich kids from fancy prep schools, poor kids who went to public school in the inner city or farm towns.  Unlike so many public law schools, Mizzou has kept its operating costs and its tuition at reasonable levels, so an education here is really open to just about all qualified students from the state.  (Minnesota’s in-state law school tuition is $41,222; Missouri’s is $19,545.)  We mix everyone together and end up with a wonderful student body.  I simply adore my students.

Second, I want to support students who have been the subject of racist remarks because I, too, have experienced the pain of being mocked, criticized, ridiculed, etc. for who I am.  I was a not-very-athletic gay kid who attended the very traditional and somewhat jockish Fairview Christian Academy.  I followed that up with Wheaton College, Billy Graham’s alma mater.  For most of my formative years, I was continually reminded that I was deficient, flawed, damned.  Express slurs were few and far between (though they occurred), but I was not accepted for who I am.  I know the pain of exclusion, and I want both to provide an empathetic ear to my students who feel excluded and to sound a prophetic voice against those who discriminate.

But I could not really support my Mizzou students in this difficult time if I did not point out a few things.

First–The top administrators of a school of 35,000 people cannot prevent all instances of racism.  Ignorant, mean people are sometimes going to yell slurs from their pick-up trucks when they drive through campus.  Drunken frat boys are occasionally going to say ugly things.  When you ambush the homecoming parade, to which parents have brought their small children for a rah-rah college experience, some people are not going to be nice to you.  Those ambushed may be taken aback and may not say all the right things.  People who draw things with poop are especially hard to control. Be prepared: The people who replace the deposed president and chancellor at Mizzou are unlikely to prevent every racist incident on our campus.

Second–The U.S. Constitution forbids state institutions from employing racial quotas.  Having been involved in hiring at Mizzou for a number of years, I can assure you that we bend over backward to fill open positions with qualified minority applicants. It is highly unlikely that Concerned Student 1950’s demand that the percentage of black faculty and staff at Mizzou be raised to 10% by 2017-18 can be implemented in a manner consistent with constitutional obligations.  You should know that.

Third–Free speech means more than the freedom to express views with which you agree.  I honestly think most Mizzou students understand this point, but I’m afraid that the administrator and communications professor in this video don’t grasp it.  Lest you be misled by their ill-advised bullying, you should know that the First Amendment is for everyone.

Fourth–Unreasonable demands have consequences.  We will survive this, but Mizzou has been badly weakened.  I can’t imagine that the press accounts from the last week will help with minority student and faculty recruitment next year.  That’s a shame, because based on my encounters with a great many minority students and professors at Mizzou over the past twelve years, I believe most have had good experiences.  Perhaps they haven’t been honest with me.  Or perhaps the situation has changed in the last couple of years.  If so, I’m terribly sorry to hear that. But, following the events of the last week, I can’t imagine that next year will be better.

Fifth–Regardless of your take on the events of the last week, I hope you will not let bitterness reign in your hearts.  Unlike many of my gay friends from conservative religious backgrounds, I chose years ago not to write off those people who were once unkind to me.  I’m glad I made that choice.  I hope any Mizzou student who is currently feeling marginalized for any reason will keep calm, carry on, give others the benefit of the doubt, and be open to reconciliation.

So, Mizzou students, I support you.  But I will not coddle you.  You’re adults and should be treated as such.


Alden Abbott and I recently co-authored an article, forthcoming in the Journal of Competition Law and Economics, in which we examined the degree to which the Supreme Court and the federal enforcement agencies have recognized the inherent limits of antitrust law. We concluded that the Roberts Court has admirably acknowledged those limits and has for the most part crafted liability rules that will maximize antitrust’s social value. The enforcement agencies, by contrast, have largely ignored antitrust’s intrinsic limits. In a number of areas, they have sought to expand antitrust’s reach in ways likely to reduce consumer welfare.

The bright spot in federal antitrust enforcement in the last few years has been Josh Wright. Time and again, he has bucked the antitrust establishment, reminding the mandarins that their goal should not be to stop every instance of anticompetitive behavior but instead to optimize antitrust by minimizing the sum of error costs (from both false negatives and false positives) and decision costs. As Judge Easterbrook famously explained, and as Josh Wright has emphasized more than anyone I know, inevitable mistakes (error costs) and heavy information requirements (decision costs) constrain what antitrust can do. Every liability rule, every defense, every immunity doctrine should be crafted with those limits in mind.

Josh will no doubt be remembered, and justifiably so, for spearheading the effort to provide guidance on how the Federal Trade Commission will exercise its amorphous authority to police “unfair methods of competition.” Several others have lauded Josh’s fine contribution on that matter (as have I), so I won’t gild that lily here. Instead, let me briefly highlight two other areas in which Josh has properly pushed for a recognition of antitrust’s inherent limits.

Vertical Restraints

Vertical restraints—both intrabrand restraints like resale price maintenance (RPM) and interbrand restraints like exclusive dealing—are a competitive mixed bag. Under certain conditions, such restraints may reduce overall market output, causing anticompetitive harm. Under other, more commonly occurring conditions, vertical restraints may enhance market output. Empirical evidence suggests that most vertical restraints are output-enhancing rather than output-reducing. Enforcers taking an optimizing, limits of antitrust approach will therefore exercise caution in condemning or discouraging vertical restraints.

That’s exactly what Josh Wright has done. In an early post-Leegin RPM order predating Josh’s tenure, the FTC endorsed a liability rule that placed an inappropriately heavy burden on RPM defendants. Josh later laid the groundwork for correcting that mistake, advocating a much more evidence-based (and defendant-friendly) RPM rule. In the McWane case, the Commission condemned an exclusive dealing arrangement that had been in place for long enough to cause anticompetitive harm but hadn’t done so. Josh rightly called out the majority for elevating theoretical harm over actual market evidence. (Adopting a highly deferential stance, the Eleventh Circuit affirmed the Commission majority, but Josh was right to criticize the majority’s implicit hostility toward exclusive dealing.) In settling the Graco case, the Commission again went beyond the evidence, requiring the defendant to cease exclusive dealing and to stop giving loyalty rebates even though there was no evidence that either sort of vertical restraint contributed to the anticompetitive harm giving rise to the action at issue. Josh rightly took the Commission to task for reflexively treating vertical restraints as suspect when they’re usually procompetitive and had an obvious procompetitive justification (avoidance of interbrand free-riding) in the case at hand.

Horizontal Mergers

Horizontal mergers, like vertical restraints, are competitive mixed bags. Any particular merger of competitors may impose some consumer harm by reducing the competition facing the merged firm. The same merger, though, may provide some consumer benefit by lowering the merged firm’s costs and thereby allowing it to compete more vigorously (most notably, by lowering its prices). A merger policy committed to minimizing the consumer welfare losses from unwarranted condemnations of net beneficial mergers and improper acquittals of net harmful ones would afford equal treatment to claims of anticompetitive harm and procompetitive benefit, requiring each to be established by the same quantum of proof.

The federal enforcement agencies’ new Horizontal Merger Guidelines, however, may put a thumb on the scale, tilting the balance toward a finding of anticompetitive harm. The Guidelines make it easier for the agencies to establish likely anticompetitive harm. Enforcers may now avoid defining a market if they point to adverse unilateral effects using the gross upward pricing pressure index (GUPPI). The merging parties, by contrast, bear a heavy burden when they seek to show that their contemplated merger will occasion efficiencies. They must: (1) prove that any claimed efficiencies are “merger-specific” (i.e., incapable of being achieved absent the merger); (2) “substantiate” asserted efficiencies; and (3) show that such efficiencies will result in the very markets in which the agencies have established likely anticompetitive effects.

In an important dissent (Ardagh), Josh observed that the agencies’ practice has evolved such that there are asymmetric burdens in establishing competitive effects, and he cautioned that this asymmetry will enhance error costs. (Geoff praised that dissent here.) In another dissent (Family Dollar/Dollar Tree), Josh acknowledged some potential problems with the promising but empirically unverified GUPPI, and he wisely advocated the creation of safe harbors for mergers generating very low GUPPI scores. (I praised that dissent here.)

I could go on and on, but these examples suffice to illustrate what has been, in my opinion, Josh’s most important contribution as an FTC commissioner: his constant effort to strengthen antitrust’s effectiveness by acknowledging its inevitable and inexorable limits. Coming on the heels of the FTC’s and DOJ’s rejection of the Section 2 Report—a document that was highly attuned to antitrust’s limits—Josh was just what antitrust needed.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(The Value of Sales Diverted to Product B) / (Foregone Revenues on Lost Product A Sales).

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, is equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B * Unit Margin on B) / (Number of A Sales Lost * Price of A).

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.
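
To make those mechanics concrete, here is a minimal sketch in Python of the GUPPI calculation described above, compared against the 5% level DOJ uses.  Every input (the diversion ratio, prices, and marginal cost) is a made-up number chosen purely for illustration, not a figure from any actual matter.

    # Hypothetical GUPPI calculation for Product A (all inputs are illustrative).
    # GUPPI_A = (A sales diverted to B * unit margin on B) / (A sales lost * price of A)
    #         = diversion ratio (A -> B) * unit margin on B / price of A

    diversion_ratio_a_to_b = 0.20   # assume 20% of lost A sales flow to B
    price_b = 10.00                 # assumed price of Product B
    marginal_cost_b = 8.50          # assumed marginal cost of Product B
    price_a = 12.00                 # assumed price of Product A

    unit_margin_b = price_b - marginal_cost_b
    guppi_a = diversion_ratio_a_to_b * unit_margin_b / price_a

    print(f"GUPPI on Product A: {guppi_a:.1%}")
    # DOJ's stated practice, per the text: a GUPPI below 5% does not give rise
    # to a merger challenge.
    if guppi_a < 0.05:
        print("Below the 5% level DOJ treats as de minimis.")
    else:
        print("At or above the 5% level.")

With these made-up inputs the GUPPI on Product A works out to 2.5%, which would fall within the safe harbor DOJ applies and that Commissioner Wright would have the FTC adopt.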

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made even in low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI. Thus, at some point, a positive but low GUPPI should be deemed insignificant. (A rough numerical sketch following this list illustrates how the AVC proxy can inflate the GUPPI.)
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counterbalance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
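To see why the measurement problems flagged in point 2 matter, consider a rough numerical sketch in Python. The formula recovering the diversion ratio from the two elasticities is a standard one, but every number and variable name below is invented for illustration; nothing comes from the Family Dollar/Dollar Tree record.

```python
# Hypothetical inputs; every figure is invented for illustration.
own_elasticity_a    = -2.0        # Product A's own-price elasticity of demand
cross_elasticity_ba =  0.5        # elasticity of B's demand with respect to A's price
qty_a, qty_b        = 1000, 800   # current unit sales of A and B
price_a = price_b   = 10.0
marginal_cost_b     = 6.0         # "true" marginal cost of B
avg_variable_cost_b = 5.0         # AVC proxy, assumed (as in the text) to sit below MC

# One standard way to recover the diversion ratio from A to B from the elasticities:
# (cross-price effect on B's sales) / (own-price effect on A's sales).
diversion_ab = (cross_elasticity_ba * qty_b) / (abs(own_elasticity_a) * qty_a)

def guppi_a(diversion: float, unit_margin_b: float, a_price: float) -> float:
    # (diverted units * unit margin on B) / (lost units * price of A)
    return diversion * unit_margin_b / a_price

guppi_true  = guppi_a(diversion_ab, price_b - marginal_cost_b, price_a)
guppi_proxy = guppi_a(diversion_ab, price_b - avg_variable_cost_b, price_a)

print(f"diversion ratio A -> B:  {diversion_ab:.2f}")   # 0.20
print(f"GUPPI with true MC:      {guppi_true:.1%}")     # 8.0%
print(f"GUPPI with AVC proxy:    {guppi_proxy:.1%}")    # 10.0%, overstated
```

The gap between the last two lines is the point of the second bullet above: because average variable cost sits below true marginal cost, the proxy inflates the margin on the product to which sales are diverted and therefore inflates the GUPPI, which is one more reason a small positive GUPPI should not be treated as proof of likely harm.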

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

Anybody who has spent much time with children knows how squishy a concept “unfairness” can be.  One can hear the exchange, “He’s not being fair!” “No, she’s not!,” only so many times before coming to understand that unfairness is largely in the eye of the beholder.

Perhaps it’s unfortunate, then, that Congress chose a century ago to cast the Federal Trade Commission’s authority in terms of preventing “unfair methods of competition.”  But that’s what it did, and the question now is whether there is some way to mitigate this “eye of the beholder” problem.

There is.

We know that any business practice that violates the substantive antitrust laws (the Sherman and Clayton Acts) is an unfair method of competition, so we can look to Sherman and Clayton Act precedents to assess the “unfairness” of business practices that those laws reach.  But what about the Commission’s so-called “standalone” UMC authority—its power to prevent business practices that seem to impact competition unfairly but are not technically violations of the substantive antitrust laws?

Almost two years ago, Commissioner Josh Wright recognized that if the FTC’s standalone UMC authority is to play a meaningful role in assuring market competition, the Commission should issue guidelines on what constitutes an unfair method of competition. He was right.  The Commission, you see, really has only four options with respect to standalone Section 5 claims:

  1. It could bring standalone actions based on current commissioners’ considered judgments about what constitutes unfairness. Such an approach, though, is really inconsistent with the rule of law. Past commissioners, for example, have gone so far as to suggest that practices causing “resource depletion, energy waste, environmental contamination, worker alienation, [and] the psychological and social consequences of producer-stimulated demands” could be unfair methods of competition. Maybe our current commissioners wouldn’t cast so wide a net, but they’re not always going to be in power. A government of laws and not of men simply can’t mete out state power on the basis of whim.
  2. It could bring standalone actions based on unfairness principles appearing in Section 5’s “common law.” The problem here is that there is no such common law. As Commissioner Wright has observed and I have previously explained, a common law doesn’t just happen. Development of a common law requires vigorously litigated disputes and reasoned, published opinions that resolve those disputes and serve as precedent. Section 5 “litigation,” such as it is, doesn’t involve any of that.
    • First, standalone Section 5 disputes tend not to be vigorously litigated. Because the FTC acts as both prosecutor and judge in such actions, their outcome is nearly a foregone conclusion. When FTC staff wins before the administrative law judge, the ALJ’s decision is always affirmed by the full Commission; when staff loses before the ALJ, the full Commission always reverses. Couple this stacked deck with the fact that unfairness exists in the eye of the beholder and will therefore change with the composition of the Commission, and we end up with a situation in which accused parties routinely settle. As Commissioner Wright observes, “parties will typically prefer to settle a Section 5 claim rather than go through lengthy and costly litigation in which they are both shooting at a moving target and have the chips stacked against them.”
    • The consent decrees that memorialize settlements, then, offer little prospective guidance. They usually don’t include any detailed explanation of why the practice at issue was an unfair method of competition. Even if they did, it wouldn’t matter much; the Commission doesn’t treat its own enforcement decisions as precedent. In light of the realities of Section 5 litigation, there really is no Section 5 common law.
  3. It could refrain from bringing standalone Section 5 actions and pursue only business practices that violate the substantive antitrust laws. Substantive antitrust violations constitute unfair methods of competition, and the federal courts have established fairly workable principles for determining when business practices violate the Sherman and Clayton Acts. The FTC could therefore avoid the “eye of the beholder” problem by limiting its UMC authority to business conduct that violates the antitrust laws. Such an approach, though, would prevent the FTC from policing conduct that, while not technically an antitrust violation, is anticompetitive and injurious to consumers.
  4. It could bring standalone Section 5 actions based on articulated guidelines establishing what constitutes an unfair method of competition. This is really the only way to use Section 5 to pursue business practices that are not otherwise antitrust violations, without offending the rule of law.

Now, if the FTC is to take this fourth approach—the only one that both allows for standalone Section 5 actions and honors rule of law commitments—it obviously has to settle on a set of guidelines.  Fortunately, it has almost done so!

Since Commissioner Wright called for Section 5 guidelines almost two years ago, much ink has been spilled outlining and critiquing proposed guidelines.  Commissioner Wright got the ball rolling by issuing his own proposal along with his call for the adoption of guidelines.  Commissioner Ohlhausen soon followed suit, proposing a slightly broader set of principles.  Numerous commentators then joined the conversation (a number doing so in a TOTM symposium), and each of the other commissioners has now stated her own views.

A good deal of consensus has emerged.  Each commissioner agrees that Section 5 should be used to prosecute only conduct that is actually anticompetitive (as defined by the federal courts).  There is also apparent consensus on the view that standalone Section 5 authority should not be used to challenge conduct governed by well-forged liability principles under the Sherman and Clayton Acts.  (For example, a practice routinely evaluated under Section 2 of the Sherman Act should not be pursued using standalone Section 5 authority.)  The commissioners, and the vast majority of commentators, also agree that there should be some efficiencies screen in prosecution decisions.  The remaining disagreement centers on the scope of the efficiencies screen—i.e., how much of an efficiency benefit must a business practice confer in order to be insulated from standalone Section 5 liability?

On that narrow issue—the only legitimate point of dispute remaining among the commissioners—three views have emerged:  Commissioner Wright would refrain from prosecuting if the conduct at issue creates any cognizable efficiencies; Commissioner Ohlhausen would do so as long as the efficiencies are not disproportionately outweighed by anticompetitive harms; Chairwoman Ramirez would engage in straightforward balancing (not a “disproportionality” inquiry) and would refrain from prosecution only where efficiencies outweigh anticompetitive harms.

That leaves three potential sets of guidelines.  In each, it would be necessary that a behavior subject to any standalone Section 5 action (1) create actual or likely anticompetitive harm, and (2) not be subject to well-forged case law under the traditional antitrust laws (otherwise, pursuing the action might blur the distinction between lawful and unlawful commercial behavior).  Each of the three sets of guidelines would also include an efficiencies screen—either (3a) the conduct lacks cognizable efficiencies, (3b) the harms created by the conduct are disproportionate to the conduct’s cognizable efficiencies, or (3c) the harms created by the conduct are not outweighed by cognizable efficiencies.
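Purely as a way of visualizing the difference among the three screens, and assuming (counterfactually) that harms and efficiencies could be reduced to commensurable numbers, the options might be sketched as follows. The function names and the tolerance multiplier used for the disproportionality screen are my own inventions; none of the commissioners has quantified that standard.

```python
# Each screen returns True when the conduct would be insulated from standalone
# Section 5 liability under the corresponding efficiencies test.

def screen_3a(harms: float, efficiencies: float) -> bool:
    """No standalone Section 5 action if the conduct yields any cognizable efficiencies."""
    return efficiencies > 0

def screen_3b(harms: float, efficiencies: float, tolerance: float = 2.0) -> bool:
    """No action unless harms are disproportionate to efficiencies.

    'tolerance' is a hypothetical multiplier; the standard has never been quantified.
    """
    return harms <= tolerance * efficiencies

def screen_3c(harms: float, efficiencies: float) -> bool:
    """Straightforward balancing: no action if efficiencies outweigh harms."""
    return efficiencies > harms

# Conduct with real but modest efficiencies and somewhat larger harms is insulated
# under (3a) and (3b) but not under (3c).
for name, screen in [("3a", screen_3a), ("3b", screen_3b), ("3c", screen_3c)]:
    print(name, "insulated" if screen(harms=3.0, efficiencies=2.0) else "actionable")
```

The ordering of the outputs tracks the text: option (3a) is the most enforcement-restraining screen and option (3c) the least.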

As Commissioner Wright has observed, any one of these sets of guidelines would be superior to the status quo.  Accordingly, if the commissioners could agree on the acceptability of any of them, they could improve the state of U.S. competition law.

Recognizing as much, Commissioner Wright is wisely calling on the commissioners to vote on the acceptability of each set of guidelines.  If any set is deemed acceptable by a majority of commissioners, it should be promulgated as official FTC Guidance.  (Presumably, if more than one set commands majority support, the set that most restrains FTC enforcement authority would be the one promulgated as FTC Guidance.)

Of course, individual commissioners might just choose not to vote.  That would represent a sad abdication of authority.  Given that there isn’t (and under current practice, there can’t be) a common law of Section 5, failure to vote on a set of guidelines would effectively cast a vote for either option 1 stated above (ignore rule of law values) or option 3 (limit Section 5’s potential to enhance consumer welfare).  Let’s hope our commissioners don’t relegate us to those options.

The debate has occurred.  It’s time to vote.