
Alden Abbott and I recently co-authored an article, forthcoming in the Journal of Competition Law and Economics, in which we examined the degree to which the Supreme Court and the federal enforcement agencies have recognized the inherent limits of antitrust law. We concluded that the Roberts Court has admirably acknowledged those limits and has for the most part crafted liability rules that will maximize antitrust’s social value. The enforcement agencies, by contrast, have largely ignored antitrust’s intrinsic limits. In a number of areas, they have sought to expand antitrust’s reach in ways likely to reduce consumer welfare.

The bright spot in federal antitrust enforcement in the last few years has been Josh Wright. Time and again, he has bucked the antitrust establishment, reminding the mandarins that their goal should not be to stop every instance of anticompetitive behavior but instead to optimize antitrust by minimizing the sum of error costs (from both false negatives and false positives) and decision costs. As Judge Easterbrook famously explained, and as Josh Wright has emphasized more than anyone I know, inevitable mistakes (error costs) and heavy information requirements (decision costs) constrain what antitrust can do. Every liability rule, every defense, every immunity doctrine should be crafted with those limits in mind.

Josh will no doubt be remembered, and justifiably so, for spearheading the effort to provide guidance on how the Federal Trade Commission will exercise its amorphous authority to police “unfair methods of competition.” Several others have lauded Josh’s fine contribution on that matter (as have I), so I won’t gild that lily here. Instead, let me briefly highlight two other areas in which Josh has properly pushed for a recognition of antitrust’s inherent limits.

Vertical Restraints

Vertical restraints—both intrabrand restraints like resale price maintenance (RPM) and interbrand restraints like exclusive dealing—are a competitive mixed bag. Under certain conditions, such restraints may reduce overall market output, causing anticompetitive harm. Under other, more commonly occurring conditions, vertical restraints may enhance market output. Empirical evidence suggests that most vertical restraints are output-enhancing rather than output-reducing. Enforcers taking an optimizing, limits-of-antitrust approach will therefore exercise caution in condemning or discouraging vertical restraints.

That’s exactly what Josh Wright has done. In an early post-Leegin RPM order predating Josh’s tenure, the FTC endorsed a liability rule that placed an inappropriately heavy burden on RPM defendants. Josh later laid the groundwork for correcting that mistake, advocating a much more evidence-based (and defendant-friendly) RPM rule. In the McWane case, the Commission condemned an exclusive dealing arrangement that had been in place for long enough to cause anticompetitive harm but hadn’t done so. Josh rightly called out the majority for elevating theoretical harm over actual market evidence. (Adopting a highly deferential stance, the Eleventh Circuit affirmed the Commission majority, but Josh was right to criticize the majority’s implicit hostility toward exclusive dealing.) In settling the Graco case, the Commission again went beyond the evidence, requiring the defendant to cease exclusive dealing and to stop giving loyalty rebates even though there was no evidence that either sort of vertical restraint contributed to the anticompetitive harm giving rise to the action at issue. Josh rightly took the Commission to task for reflexively treating vertical restraints as suspect when they’re usually procompetitive and had an obvious procompetitive justification (avoidance of interbrand free-riding) in the case at hand.

Horizontal Mergers

Horizontal mergers, like vertical restraints, are competitive mixed bags. Any particular merger of competitors may impose some consumer harm by reducing the competition facing the merged firm. The same merger, though, may provide some consumer benefit by lowering the merged firm’s costs and thereby allowing it to compete more vigorously (most notably, by lowering its prices). A merger policy committed to minimizing the consumer welfare losses from unwarranted condemnations of net beneficial mergers and improper acquittals of net harmful ones would afford equal treatment to claims of anticompetitive harm and procompetitive benefit, requiring each to be established by the same quantum of proof.

The federal enforcement agencies’ new Horizontal Merger Guidelines, however, may put a thumb on the scale, tilting the balance toward a finding of anticompetitive harm. The Guidelines make it easier for the agencies to establish likely anticompetitive harm. Enforcers may now avoid defining a market if they point to adverse unilateral effects using the gross upward pricing pressure index (GUPPI). The merging parties, by contrast, bear a heavy burden when they seek to show that their contemplated merger will occasion efficiencies. They must: (1) prove that any claimed efficiencies are “merger-specific” (i.e., incapable of being achieved absent the merger); (2) “substantiate” asserted efficiencies; and (3) show that such efficiencies will result in the very markets in which the agencies have established likely anticompetitive effects.

In an important dissent (Ardagh), Josh observed that the agencies’ practice has evolved such that there are asymmetric burdens in establishing competitive effects, and he cautioned that this asymmetry will enhance error costs. (Geoff praised that dissent here.) In another dissent (Family Dollar/Dollar Tree), Josh acknowledged some potential problems with the promising but empirically unverified GUPPI, and he wisely advocated the creation of safe harbors for mergers generating very low GUPPI scores. (I praised that dissent here.)

I could go on and on, but these examples suffice to illustrate what has been, in my opinion, Josh’s most important contribution as an FTC commissioner: his constant effort to strengthen antitrust’s effectiveness by acknowledging its inevitable and inexorable limits. Coming on the heels of the FTC’s and DOJ’s rejection of the Section 2 Report—a document that was highly attuned to antitrust’s limits—Josh was just what antitrust needed.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(Value of Sales Diverted to Product B) ÷ (Foregone Revenues on Lost Product A Sales)

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, are equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B * Unit Margin on B) ÷ (Number of A Sales Lost * Price of A)

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.
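The arithmetic above, together with a DOJ-style 5% screen, can be sketched in a few lines of Python. This is illustrative only: the function names are mine, and the 20% diversion ratio and dollar figures are hypothetical; a real screen would also require the elasticity and cost estimates discussed in this post.

```python
def guppi(diversion_ratio: float, unit_margin_b: float, price_a: float) -> float:
    """GUPPI on Product A: (diverted A sales * unit margin on B) divided by
    (lost A sales * price of A), which reduces to
    diversion ratio * unit margin on B / price of A."""
    return diversion_ratio * unit_margin_b / price_a

def within_safe_harbor(score: float, threshold: float = 0.05) -> bool:
    """DOJ-style screen: no challenge when the GUPPI falls below ~5%."""
    return score < threshold

# Hypothetical inputs: 20% of lost Product A sales divert to B,
# B carries a $2.00 unit margin, and A sells for $10.00.
score = guppi(diversion_ratio=0.20, unit_margin_b=2.00, price_a=10.00)
print(f"GUPPI = {score:.0%}; within safe harbor: {within_safe_harbor(score)}")
# → GUPPI = 4%; within safe harbor: True
```

Note that the safe harbor does the real work here: without the threshold, the positive score alone tells an enforcer almost nothing, for reasons taken up below.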

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made in even low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI. Thus, at some point, a positive but low GUPPI should be deemed insignificant.
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counterbalance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
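The measurement concern in point 2 can be made concrete with a toy calculation. All figures are hypothetical; this simply re-runs the formula above under two different cost assumptions.

```python
# Hypothetical figures showing how proxying marginal cost with average
# variable cost (AVC) inflates the measured margin, and thus the GUPPI.
diversion_ratio = 0.20       # share of lost A sales diverted to B
price_a = 10.00
price_b = 10.00
marginal_cost_b = 7.00       # true (usually unobservable) marginal cost
avg_variable_cost_b = 6.00   # common proxy; often below MC in the relevant range

true_guppi = diversion_ratio * (price_b - marginal_cost_b) / price_a
measured_guppi = diversion_ratio * (price_b - avg_variable_cost_b) / price_a

print(f"true GUPPI = {true_guppi:.0%}, measured GUPPI = {measured_guppi:.0%}")
# → true GUPPI = 6%, measured GUPPI = 8%
```

On these made-up numbers, a merger with a true 6% GUPPI shows up as 8% under the AVC proxy—one more reason a modest positive score should not, by itself, be treated as damning.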

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

Anybody who has spent much time with children knows how squishy a concept “unfairness” can be.  One can hear the exchange, “He’s not being fair!” “No, she’s not!,” only so many times before coming to understand that unfairness is largely in the eye of the beholder.

Perhaps it’s unfortunate, then, that Congress chose a century ago to cast the Federal Trade Commission’s authority in terms of preventing “unfair methods of competition.”  But that’s what it did, and the question now is whether there is some way to mitigate this “eye of the beholder” problem.

There is.

We know that any business practice that violates the substantive antitrust laws (the Sherman and Clayton Acts) is an unfair method of competition, so we can look to Sherman and Clayton Act precedents to assess the “unfairness” of business practices that those laws reach.  But what about the Commission’s so-called “standalone” UMC authority—its power to prevent business practices that seem to impact competition unfairly but are not technically violations of the substantive antitrust laws?

Almost two years ago, Commissioner Josh Wright recognized that if the FTC’s standalone UMC authority is to play a meaningful role in assuring market competition, the Commission should issue guidelines on what constitutes an unfair method of competition. He was right.  The Commission, you see, really has only four options with respect to standalone Section 5 claims:

  1. It could bring standalone actions based on current commissioners’ considered judgments about what constitutes unfairness. Such an approach, though, is really inconsistent with the rule of law. Past commissioners, for example, have gone so far as to suggest that practices causing “resource depletion, energy waste, environmental contamination, worker alienation, [and] the psychological and social consequences of producer-stimulated demands” could be unfair methods of competition. Maybe our current commissioners wouldn’t cast so wide a net, but they’re not always going to be in power. A government of laws and not of men simply can’t mete out state power on the basis of whim.
  2. It could bring standalone actions based on unfairness principles appearing in Section 5’s “common law.” The problem here is that there is no such common law. As Commissioner Wright has observed and I have previously explained, a common law doesn’t just happen. Development of a common law requires vigorously litigated disputes and reasoned, published opinions that resolve those disputes and serve as precedent. Section 5 “litigation,” such as it is, doesn’t involve any of that.
    • First, standalone Section 5 disputes tend not to be vigorously litigated. Because the FTC acts as both prosecutor and judge in such actions, their outcome is nearly a foregone conclusion. When FTC staff win before the administrative law judge, the ALJ’s decision is always affirmed by the full Commission; when staff lose before the ALJ, the full Commission always reverses. Couple this stacked deck with the fact that unfairness exists in the eye of the beholder and will therefore change with the composition of the Commission, and we end up with a situation in which accused parties routinely settle. As Commissioner Wright observes, “parties will typically prefer to settle a Section 5 claim rather than go through lengthy and costly litigation in which they are both shooting at a moving target and have the chips stacked against them.”
    • The consent decrees that memorialize settlements, then, offer little prospective guidance. They usually don’t include any detailed explanation of why the practice at issue was an unfair method of competition. Even if they did, it wouldn’t matter much; the Commission doesn’t treat its own enforcement decisions as precedent. In light of the realities of Section 5 litigation, there really is no Section 5 common law.
  3. It could refrain from bringing standalone Section 5 actions and pursue only business practices that violate the substantive antitrust laws. Substantive antitrust violations constitute unfair methods of competition, and the federal courts have established fairly workable principles for determining when business practices violate the Sherman and Clayton Acts. The FTC could therefore avoid the “eye of the beholder” problem by limiting its UMC authority to business conduct that violates the antitrust laws. Such an approach, though, would prevent the FTC from policing conduct that, while not technically an antitrust violation, is anticompetitive and injurious to consumers.
  4. It could bring standalone Section 5 actions based on articulated guidelines establishing what constitutes an unfair method of competition. This is really the only way to use Section 5 to pursue business practices that are not otherwise antitrust violations, without offending the rule of law.

Now, if the FTC is to take this fourth approach—the only one that both allows for standalone Section 5 actions and honors rule of law commitments—it obviously has to settle on a set of guidelines.  Fortunately, it has almost done so!

Since Commissioner Wright called for Section 5 guidelines almost two years ago, much ink has been spilled outlining and critiquing proposed guidelines.  Commissioner Wright got the ball rolling by issuing his own proposal along with his call for the adoption of guidelines.  Commissioner Ohlhausen soon followed suit, proposing a slightly broader set of principles.  Numerous commentators then joined the conversation (a number doing so in a TOTM symposium), and each of the other commissioners has now stated her own views.

A good deal of consensus has emerged.  Each commissioner agrees that Section 5 should be used to prosecute only conduct that is actually anticompetitive (as defined by the federal courts).  There is also apparent consensus on the view that standalone Section 5 authority should not be used to challenge conduct governed by well-forged liability principles under the Sherman and Clayton Acts.  (For example, a practice routinely evaluated under Section 2 of the Sherman Act should not be pursued using standalone Section 5 authority.)  The commissioners, and the vast majority of commentators, also agree that there should be some efficiencies screen in prosecution decisions.  The remaining disagreement centers on the scope of the efficiencies screen—i.e., how much of an efficiency benefit must a business practice confer in order to be insulated from standalone Section 5 liability?

On that narrow issue—the only legitimate point of dispute remaining among the commissioners—three views have emerged:  Commissioner Wright would refrain from prosecuting if the conduct at issue creates any cognizable efficiencies; Commissioner Ohlhausen would do so as long as the efficiencies are not disproportionately outweighed by anticompetitive harms; Chairwoman Ramirez would engage in straightforward balancing (not a “disproportionality” inquiry) and would refrain from prosecution only where efficiencies outweigh anticompetitive harms.

That leaves three potential sets of guidelines.  In each, it would be necessary that a behavior subject to any standalone Section 5 action (1) create actual or likely anticompetitive harm, and (2) not be subject to well-forged case law under the traditional antitrust laws (since pursuing a standalone action against such conduct would blur the distinction between lawful and unlawful commercial behavior).  Each of the three sets of guidelines would also include an efficiencies screen—either (3a) the conduct lacks cognizable efficiencies, (3b) the harms created by the conduct are disproportionate to the conduct’s cognizable efficiencies, or (3c) the harms created by the conduct are not outweighed by cognizable efficiencies.

As Commissioner Wright has observed, any one of these sets of guidelines would be superior to the status quo.  Accordingly, if the commissioners could agree on the acceptability of any of them, they could improve the state of U.S. competition law.

Recognizing as much, Commissioner Wright is wisely calling on the commissioners to vote on the acceptability of each set of guidelines.  If any set is deemed acceptable by a majority of commissioners, it should be promulgated as official FTC Guidance.  (Presumably, if more than one set commands majority support, the set that most restrains FTC enforcement authority would be the one promulgated as FTC Guidance.)

Of course, individual commissioners might just choose not to vote.  That would represent a sad abdication of authority.  Given that there isn’t (and under current practice, there can’t be) a common law of Section 5, failure to vote on a set of guidelines would effectively cast a vote for either option 1 stated above (ignore rule of law values) or option 3 (limit Section 5’s potential to enhance consumer welfare).  Let’s hope our commissioners don’t relegate us to those options.

The debate has occurred.  It’s time to vote.

Section 5 of the Federal Trade Commission Act proclaims that “[u]nfair methods of competition . . . are hereby declared unlawful.” The FTC has exclusive authority to enforce that provision and uses it to prosecute Sherman Act violations. The Commission also uses the provision to prosecute conduct that doesn’t violate the Sherman Act but is, in the Commission’s view, an “unfair method of competition.”

That’s somewhat troubling, for “unfairness” is largely in the eye of the beholder. One FTC Commissioner recently defined an unfair method of competition as an action that is “‘collusive, coercive, predatory, restrictive, or deceitful,’ or otherwise oppressive, [where the actor lacks] a justification grounded in its legitimate, independent self-interest.” Some years ago, a commissioner observed that a “standalone” Section 5 action (i.e., one not premised on conduct that would violate the Sherman Act) could be used to police “social and environmental harms produced as unwelcome by-products of the marketplace: resource depletion, energy waste, environmental contamination, worker alienation, the psychological and social consequences of producer-stimulated demands.” While it’s unlikely that any FTC Commissioner would go that far today, the fact remains that those subject to Section 5 really don’t know what it forbids.  And that situation flies in the face of the Rule of Law, which at a minimum requires that those in danger of state punishment know in advance what they’re not allowed to do.

In light of this fundamental Rule of Law problem (not to mention the detrimental chilling effect vague competition rules create), many within the antitrust community have called for the FTC to provide guidance on the scope of its “unfair methods of competition” authority. Most notably, two members of the five-member FTC—Commissioners Maureen Ohlhausen and Josh Wright—have publicly called for the Commission to promulgate guidelines. So have former FTC Chairman Bill Kovacic, a number of leading practitioners, and a great many antitrust scholars.

Unfortunately, FTC Chairwoman Edith Ramirez has opposed the promulgation of Section 5 guidelines. She says she instead “favor[s] the common law approach, which has been a mainstay of American antitrust policy since the turn of the twentieth century.” Chairwoman Ramirez observes that the common law method has managed to distill workable liability rules from broad prohibitions in the primary antitrust statutes. Section 1 of the Sherman Act, for example, provides that “[e]very contract, combination … or conspiracy, in restraint of trade … is declared to be illegal.” Section 2 prohibits actions to “monopolize, or attempt to monopolize … any part of … trade.” Clayton Act Section 7 forbids any merger whose effect “may be substantially to lessen competition, or tend to create a monopoly.” Just as the common law transformed these vague provisions into fairly clear liability rules, the Chairwoman says, it can be used to provide adequate guidance on Section 5.

The problem is, there is no Section 5 common law. As Commissioner Wright and his attorney-advisor Jan Rybnicek explain in a new paper, development of a common law—which concededly may be preferable to a prescriptive statutory approach, given its flexibility, ability to evolve with new learning, and sensitivity to time- and place-specific factors—requires certain conditions that do not exist in the Section 5 context.

The common law develops and evolves in a salutary direction because (1) large numbers of litigants do their best to persuade adjudicators of the superiority of their position; (2) the closest cases—those requiring the adjudicator to make fine distinctions—get appealed and reported; (3) the adjudicators publish opinions that set forth all relevant facts, the arguments of the parties, and why one side prevailed over the other; (4) commentators criticize published opinions that are unsound or rely on welfare-reducing rules; (5) adjudicators typically follow past precedents, tweaking (or occasionally overruling) them when they have been undermined; and (6) future parties rely on past decisions when planning their affairs.

Section 5 “adjudication,” such as it is, doesn’t look anything like this. Because the Commission has exclusive authority to bring standalone Section 5 actions, it alone picks the disputes that could form the basis of any common law. It then acts as both prosecutor and judge in the administrative action that follows. Not surprisingly, defendants, who cannot know the contours of a prohibition that will change with the composition of the Commission and who face an inherently biased tribunal, usually settle quickly. After all, they are, in Commissioner Wright’s words, both “shooting at a moving target and have the chips stacked against them.” As a result, we end up with very few disputes, and even those are not vigorously litigated.

Moreover, because nearly all standalone Section 5 actions result in settlements, we almost never end up with a reasoned opinion from an adjudicator explaining why she did or did not find liability on the facts at hand and why she rejected the losing side’s arguments. These sorts of opinions are absolutely crucial for the development of the common law. Chairwoman Ramirez says litigants can glean principles from other administrative documents like complaints and consent agreements, but those documents can’t substitute for a reasoned opinion that parses arguments and says which work, which don’t, and why. On top of all this, the FTC doesn’t even treat its own enforcement decisions as precedent! How on earth could the Commission’s body of enforcement decisions guide decision-making when each could well be a one-off?

I’m a huge fan of the common law. It generally accommodates the Hayekian “knowledge problem” far better than inflexible, top-down statutes. But it requires both inputs—lots of vigorously litigated disputes—and outputs—reasoned opinions that are recognized as presumptively binding. In the Section 5 context, we’re short on both. It’s time for guidelines.

PayPal co-founder Peter Thiel has a terrific essay in the Review section of today’s Wall Street Journal.  The essay, Competition Is for Losers, is adapted from Mr. Thiel’s soon-to-be-released book, Zero to One: Notes on Startups, or How to Build the Future.  Based on the title of the book, I assume it is primarily a how-to guide for entrepreneurs.  But if the rest of the book is anything like the essay in today’s Journal, it will also offer lots of guidance to policy makers–antitrust officials in particular.

We antitrusters usually begin with the assumption that monopoly is bad and perfect competition is good. That’s the starting point for most antitrust courses: the professor lays out the model of perfect competition, points to all the wealth it creates and how that wealth is distributed (more to consumers than to producers), and contrasts it to the monopoly pricing model, with its steep marginal revenue curve, hideous “deadweight loss” triangle, and unseemly redistribution of surplus from consumers to producers. Which is better, kids?  Why, perfect competition, of course!
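For anyone who wants the arithmetic behind that classroom story, here is a toy calculation (mine, not Thiel's) of the standard model, assuming a linear demand curve and constant marginal cost with made-up numbers, showing how monopoly pricing shrinks the consumer-surplus triangle, shifts surplus to the producer, and leaves a deadweight loss that no one captures:

```python
# Classroom model: linear demand P = a - b*Q, constant marginal cost c.
# The parameter values are illustrative only.

def surplus(a=100.0, b=1.0, c=20.0):
    # Perfect competition: price driven down to marginal cost
    q_comp = (a - c) / b
    cs_comp = 0.5 * (a - c) * q_comp              # consumer surplus triangle

    # Monopoly: set marginal revenue (a - 2bQ) equal to marginal cost
    q_mono = (a - c) / (2 * b)
    p_mono = a - b * q_mono
    cs_mono = 0.5 * (a - p_mono) * q_mono         # shrunken consumer triangle
    ps_mono = (p_mono - c) * q_mono               # surplus shifted to the producer
    dwl = 0.5 * (p_mono - c) * (q_comp - q_mono)  # the "hideous" deadweight loss triangle
    return cs_comp, cs_mono, ps_mono, dwl

cs_comp, cs_mono, ps_mono, dwl = surplus()
print(cs_comp)            # 3200.0: under competition, all surplus goes to consumers
print(cs_mono, ps_mono)   # 800.0 1600.0: under monopoly, most surplus moves to the producer
print(dwl)                # 800.0: surplus that simply vanishes

# The three monopoly pieces account for exactly the competitive total
assert abs(cs_comp - (cs_mono + ps_mono + dwl)) < 1e-9
```

With these numbers, monopoly pricing cuts output in half, leaves consumers a quarter of their competitive surplus, and destroys a quarter of total surplus outright. That is the static picture Thiel is pushing back against.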

Mr. Thiel makes the excellent and oft-neglected point that monopoly power is not necessarily a bad thing. First, monopolists can do certain good things that perfect competitors can’t do:

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto–“Don’t be evil”–is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence.  In business, money is either an important thing or it is everything. Monopolists can think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

Fair enough, Thiel. But what about consumers? That model we learned shows us that they’re worse off under monopoly.  And what about the deadweight loss triangle–don’t forget about that ugly thing! 

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputations–but only in a world where nothing changes.

Wait a minute, Thiel. Why do you think things are different when we inject “change” into the analysis?

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

Even the government knows this: That is why one of the departments works hard to create monopolies (by granting patents to new inventions) even though another part hunts them down (by prosecuting antitrust cases). It is possible to question whether anyone should really be rewarded a monopoly simply for having been the first to think of something like a mobile software design. But something like Apple’s monopoly profits from designing, producing and marketing the iPhone were clearly the reward for creating greater abundance, not artificial scarcity: Customers were happy to finally have the choice of paying high prices to get a smartphone that actually works. The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decadeslong operating system dominance.

…If the tendency of monopoly businesses was to hold back progress, they would be dangerous, and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and finance the ambitious research projects that firms locked in competition can’t dream of.

Geez, Thiel.  You know who you sound like?  Justice Scalia. Here’s how he once explained your idea (to shrieks and howls from many in the antitrust establishment!):

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices–at least for a short period–is what attracts “business acumen” in the first place. It induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Sounds like you and Scalia are calling for us antitrusters to update our models.  Is that it?

So why are economists obsessed with competition as an ideal state? It is a relic of history. Economists copied their mathematics from the work of 19th-century physicists: They see individuals and businesses as interchangeable atoms, not as unique creators. Their theories describe an equilibrium state of perfect competition because that is what’s easy to model, not because it represents the best of business.

C’mon now, Thiel. Surely you don’t expect us antitrusters to defer to you over all these learned economists when it comes to business.

Anyone interested in antitrust enforcement policy (and what TOTM reader isn’t?) should read FTC Commissioner Josh Wright’s interview in the latest issue of The Antitrust Source.  The extensive (22 page!) interview covers a number of topics and demonstrates the positive influence Commissioner Wright is having on antitrust enforcement and competition policy in general.

Commissioner Wright’s consistent concern with minimizing error costs will come as no surprise to TOTM regulars.  Here are a few related themes emphasized in the interview:

A commitment to evidence-based antitrust.

Asked about his prior writings on the superiority of “evidence-based” antitrust analysis, Commissioner Wright explains the concept as follows:

The central idea is to wherever possible shift away from casual empiricism and intuitions as the basis for decision-making and instead commit seriously to the decision-theoretic framework applied to minimize the costs of erroneous enforcement and policy decisions and powered by the best available theory and evidence.

This means, of course, that discrete enforcement decisions – should we bring a challenge or not? – should be based on the best available empirical evidence about the effects of the practice or transaction at issue. But it also encompasses a commitment to design institutions and structure liability rules on the basis of the best available evidence concerning a practice’s tendency to occasion procompetitive or anticompetitive effects. As Wright explains:

Evidence-based antitrust encompasses a commitment to using the best available economic theory and empirical evidence to make [a discrete enforcement] decision; but it also stands for a much broader commitment to structuring antitrust enforcement and policy decision-making. For example, evidence-based antitrust is a commitment that would require an enforcement agency seeking to design its policy with respect to a particular set of business arrangements – loyalty discounts, for example – to rely upon the existing theory and empirical evidence in calibrating that policy.

Of course, if the FTC is committed to evidence-based antitrust policy, then it will utilize its institutional advantages to enhance the empirical record on practices whose effects are unclear. Thus, Commissioner Wright lauds the FTC’s study of – rather than preemptive action against – patent assertion entities, calling it “precisely the type of activity that the FTC is well-suited to do.”

A commitment to evidence-based antitrust also means that the agency shouldn’t get ahead of itself in restricting conduct with known consumer benefits and only theoretical (i.e., not empirically established) harms. Accordingly, Commissioner Wright says he “divorced [him]self from a number of recommendations” in the FTC’s recent data broker report:

For the majority of these other recommendations [beyond basic disclosure requirements], I simply do not think that we have any evidence that the benefits from Congress adopting those recommendations would exceed the costs. … I would need to have some confidence based on evidence, especially about an area where evidence is scarce. I’m not comfortable relying on my priors about these activities, especially when confronted by something new that could be beneficial. … The danger would be that we recommend actions that either chill some of the beneficial activity the data brokers engage in or just impose compliance costs that we all recognize get passed on to consumers.

Similarly, Commissioner Wright has opposed “fencing-in” relief in consent decrees absent evidence that the practice being restricted threatens more harm than good. As an example, he points to the consent decree in the Graco case, which we discussed here:

Graco employed exclusive dealing contracts, but we did not allege that the exclusive dealing contracts violated the antitrust laws or Section 5. However, as fencing-in relief for the consummated merger, the consent included prohibitions on exclusive dealing and loyalty discounts despite there being no evidence that the firm had employed either of those tactics to anticompetitive ends. When an FTC settlement bans a form of discounting as standard injunctive relief in a merger case without convincing evidence that the discounts themselves were a competitive problem, it raises significant concerns.

A commitment to clear enforcement principles.

At several points throughout the interview, Commissioner Wright emphasizes the value of articulating clear principles that can guide business planners’ behavior. But he’s not calling for a bunch of ex ante liability rules. The old per se rule against minimum resale price maintenance, for example, was clear – and bad! Embracing overly broad liability rules for the sake of clarity is inconsistent with the evidence-based, decision-theoretic approach Commissioner Wright prefers. The clarity he is advocating, then, is clarity on broad principles that will govern enforcement decisions.  He thus reiterates his call for a formal policy statement defining the Commission’s authority to prosecute unfair methods of competition under Section 5 of the FTC Act.  (TOTM hosted a blog symposium on that topic last summer.)  Wright also suggests that the Commission should “synthesize and offer high-level principles that would provide additional guidance” on how the Commission will use its Section 5 authority to address data security matters.

Extension, not extraction, should be the touchstone for Section 2 liability.

When asked about his prior criticism of FTC actions based on alleged violations of licensing commitments to standards development organizations (e.g., N-Data), Commissioner Wright emphasized that there should be no Section 2 liability in such cases, or similar cases involving alleged patent hold-up, absent an extension of monopoly power. In other words, it is not enough to show that the alleged bad act resulted in higher prices; it must also have led to the creation, maintenance, or enhancement of monopoly power.  Wright explains:

The logic is relatively straightforward. The antitrust laws do not apply to all increases of price. The Sherman Act is not a price regulation statute. The antitrust laws govern the competitive process. The Supreme Court said in Trinko that a lawful monopolist is allowed to charge the monopoly price. In NYNEX, the Supreme Court held that even if that monopolist raises its price through bad conduct, so long as that bad conduct does not harm the competitive process, it does not violate the antitrust laws. The bad conduct may violate other laws. It may be a fraud problem, it might violate regulatory rules, it may violate all sorts of other areas of law. In the patent context, it might give rise to doctrines like equitable estoppel. But it is not an antitrust problem; antitrust cannot be the hammer for each and every one of the nails that implicate price changes.

In my view, the appropriate way to deal with patent holdup cases is to require what we require for all Section 2 cases. We do not need special antitrust rules for patent holdup; much less for patent assertion entities. The rule is simply that the plaintiff must demonstrate that the conduct results in the acquisition of market power, not merely the ability to extract existing monopoly rents. … That distinction between extracting lawfully acquired and existing monopoly rents and acquiring by unlawful conduct additional monopoly power is one that has run through Section 2 jurisprudence for quite some time.

In light of these remarks (which remind me of this excellent piece by Dennis Carlton and Ken Heyer), it is not surprising that Commissioner Wright also hopes and believes that the Roberts Court will overrule Jefferson Parish’s quasi-per se rule against tying. As Einer Elhauge has observed, that rule might make sense if the mere extraction of monopoly profits (via metering price discrimination or Loew’s-type bundling) were an “anticompetitive” effect of tying.  If, however, anticompetitive harm requires extension of monopoly power, as Wright contends, then a tie-in cannot be anticompetitive unless it results in substantial foreclosure of the tied product market, a necessary prerequisite for a tie-in to enhance market power in the tied or tying markets.  That means tying should not be evaluated under the quasi-per se rule but should instead be subject to a rule of reason similar to that governing exclusive dealing (i.e., some sort of “qualitative foreclosure” approach).  (I explain this point in great detail here.)

Optimal does not mean perfect.

Commissioner Wright makes this point in response to a question about whether the government should encourage “standards development organizations to provide greater clarity to their intellectual property policies to reduce the likelihood of holdup or other concerns.”  While Wright acknowledges that “more complete, more precise contracts” could limit the problem of patent holdup, he observes that there is a cost to greater precision and completeness and that the parties to these contracts already have an incentive to put the optimal amount of effort into minimizing the cost of holdup. He explains:

[M]inimizing the probability of holdup does not mean that it is zero. Holdup can happen. It will happen. It will be observed in the wild from time to time, and there is again an important question about whether antitrust has any role to play there. My answer to that question is yes in the case of deception that results in market power. Otherwise, we ought to leave the governance of what amount to contracts between SSO and their members to contract law and in some cases to patent doctrines like equitable estoppel that can be helpful in governing holdup.

…[I]t is quite an odd thing for an agency to be going out and giving advice to sophisticated parties on how to design their contracts. Perhaps I would be more comfortable if there were convincing and systematic evidence that the contracts were the result of market failure. But there is not such evidence.

Consumer welfare is the touchstone.

When asked whether “there [are] circumstances where non-competition concerns, such as privacy, should play a role in merger analysis,” Commissioner Wright is unwavering:

No. I think that there is a great danger when we allow competition law to be unmoored from its relatively narrow focus upon consumer welfare. It is the connection between the law and consumer welfare that allows antitrust to harness the power of economic theory and empirical methodologies. All of the gains that antitrust law and policy as a body have earned over the past fifty or sixty years have been from becoming more closely tethered to industrial organization economics, more closely integrating economic thought in the law, and in agency discretion and decision-making. I think that the tight link between the consumer welfare standard and antitrust law is what has allowed such remarkable improvements in what effectively amounts to a body of common law.

Calls to incorporate non-economic concerns into antitrust analysis, I think, threaten to undo some, if not all, of that progress. Antitrust law and enforcement in the United States has some experience with trying to incorporate various non-economic concerns, including the welfare of small dealers and worthy men and so forth. The results of the experiment were not good for consumers and did not generate sound antitrust policy. It is widely understood and recognized why that is the case.


Those are just some highlights. There’s lots more in the interview—in particular, some good stuff on the role of efficiencies in FTC investigations, the diverging standards for the FTC and DOJ to obtain injunctions against unconsummated mergers, and the proper way to analyze reverse payment settlements.  Do read the whole thing.  If you’re like me, it may make you feel a little more affinity for Mitch McConnell.

Today is the last day for public comment on the Federal Communications Commission’s latest net neutrality proposal.  Here are two excellent op-eds on the matter, one by former FCC Commissioner Robert McDowell and the other by Tom Hazlett and TOTM’s own Josh Wright.  Hopefully, the Commission will take to heart the pithy observation of one of my law school friends, Commissioner Ajit Pai:  “The Internet was free and open before the FCC adopted net neutrality rules. It remains free and open today. Net neutrality has always been a solution in search of a problem.”

Last Monday, a group of nineteen scholars of antitrust law and economics, including yours truly, urged the U.S. Court of Appeals for the Eleventh Circuit to reverse the Federal Trade Commission’s recent McWane ruling.

McWane, the largest seller of domestically produced iron pipe fittings (DIPF), would sell its products only to distributors that “fully supported” its fittings by carrying them exclusively.  There were two exceptions: where McWane products were not readily available, and where the distributor purchased a McWane rival’s pipe along with its fittings.  A majority of the FTC ruled that McWane’s policy constituted illegal exclusive dealing.

Commissioner Josh Wright agreed that the policy amounted to exclusive dealing, but he concluded that complaint counsel had failed to prove that the exclusive dealing constituted unreasonably exclusionary conduct in violation of Sherman Act Section 2.  Commissioner Wright emphasized that complaint counsel had produced no direct evidence of anticompetitive harm (i.e., an actual increase in prices or decrease in output), even though McWane’s conduct had already run its course.  Indeed, the direct evidence suggested an absence of anticompetitive effect, as McWane’s chief rival, Star, grew in market share at exactly the same rate during and after the time of McWane’s exclusive dealing.

Instead of focusing on direct evidence of competitive effect, complaint counsel pointed to a theoretical anticompetitive harm: that McWane’s exclusive dealing may have usurped so many sales from Star that Star could not achieve minimum efficient scale.  The only evidence as to what constitutes minimum efficient scale in the industry, though, was Star’s self-serving statement that it would have had lower average costs had it operated at a scale sufficient to warrant ownership of its own foundry.  As Commissioner Wright observed, evidence in the record showed that other pipe fitting producers had successfully entered the market and grown market share substantially without owning their own foundry.  Thus, actual market experience seemed to undermine Star’s self-serving testimony.

Commissioner Wright also observed that complaint counsel produced no evidence showing what percentage of McWane’s sales of DIPF might have gone to other sellers absent McWane’s exclusive dealing policy.  Only those “contestable” sales – not all of McWane’s sales to distributors subject to the full support policy – should be deemed foreclosed by McWane’s exclusive dealing.  Complaint counsel also failed to quantify sales made to McWane’s rivals under the generous exceptions to its policy.  These deficiencies prevented complaint counsel from adequately establishing the degree of market foreclosure caused by McWane’s policy – the first (but not last!) step in establishing the alleged anticompetitive harm.

In our amicus brief, we antitrust scholars take Commissioner Wright’s side on these matters.  We also observe that the Commission failed to account for an important procompetitive benefit of McWane’s policy:  it prevented rival DIPF sellers from “cherry-picking” the most popular, highest margin fittings and selling only those at prices that could be lower than McWane’s because the cherry-pickers didn’t bear the costs of producing the full line of fittings.  Such cherry-picking is a form of free-riding because every producer’s fittings are more highly valued if a full line is available.  McWane’s policy prevented the sort of free-riding that would have made its production of a full line uneconomical.

In short, the FTC’s decision made it far too easy to successfully challenge exclusive dealing arrangements, which are usually procompetitive, and calls into question all sorts of procompetitive full-line forcing arrangements.  Hopefully, the Eleventh Circuit will correct the Commission’s mistake.

Other professors signing the brief include:

  • Tom Arthur, Emory Law
  • Roger Blair, Florida Business
  • Don Boudreaux, George Mason Economics (and Café Hayek)
  • Henry Butler, George Mason Law
  • Dan Crane, Michigan Law (and occasional TOTM contributor)
  • Richard Epstein, NYU and Chicago Law
  • Ken Elzinga, Virginia Economics
  • Damien Geradin, George Mason Law
  • Gus Hurwitz, Nebraska Law (and TOTM)
  • Keith Hylton, Boston University Law
  • Geoff Manne, International Center for Law and Economics (and TOTM)
  • Fred McChesney, Miami Law
  • Tom Morgan, George Washington Law
  • Barak Orbach, Arizona Law
  • Bill Page, Florida Law
  • Paul Rubin, Emory Economics (and TOTM)
  • Mike Sykuta, Missouri Economics (and TOTM)
  • Todd Zywicki, George Mason Law (and Volokh Conspiracy)

The brief’s “Summary of Argument” follows the jump.

Whereas the antitrust rules on a number of once-condemned business practices (e.g., vertical non-price restraints, resale price maintenance, price squeezes) have become more economically sensible in the last few decades, the law on tying remains an embarrassment.  The sad state of the doctrine is evident in a federal district court’s recent denial of Viacom’s motion to dismiss a tying action by Cablevision.

According to Cablevision’s complaint, Viacom threatened to impose a substantial financial “penalty” (probably by denying a discount) unless Cablevision licensed Viacom’s less popular television programming (the “Suite Networks”) along with its popular “Core Networks” of Nickelodeon, Comedy Central, BET, and MTV.  This arrangement, Cablevision insisted, amounted to a per se illegal tie-in of the Suite Networks to the Core Networks.

Similar tying actions based on cable bundling have failed, and I have previously explained why cable bundling like this is, in fact, efficient.  But putting aside whether  the tie-in at issue here was efficient, the district court’s order is troubling because it illustrates how very unconcerned with efficiency tying doctrine is.

First, the district court rejected–correctly, under ill-founded precedents–Viacom’s argument that Cablevision was required to plead an anticompetitive effect.  It concluded that Cablevision had to allege only four elements: separate tying and tied products, coercion by the seller to force purchase of the tied product along with the tying product, the seller’s possession of market power in the tying product market, and the involvement of a “not insubstantial” dollar volume of commerce in the tied product market.  Once these elements are alleged, the court said,

plaintiffs need not allege, let alone prove, facts addressed to the anticompetitive effects element.  If a plaintiff succeeds in establishing the existence of sufficient market power to create a per se violation, the plaintiff is also relieved of the burden of rebutting any justification the defendant may offer for the tie.

In other words, if a tying plaintiff establishes the four elements listed above, the efficiency of the challenged tie-in is completely irrelevant.  And if a plaintiff merely pleads those four elements, it is entitled to proceed to discovery, which can be crippling for antitrust defendants and often causes them to settle even non-meritorious cases. Given that a great many tie-ins involving the four elements listed above are, in fact, efficient, this is a terrible rule.  It is, however, the law as established in the Supreme Court’s Jefferson Parish decision.  The blame for this silliness therefore rests on that Court, not the district court here.

But the Cablevision order includes a second unfortunate feature for which the district court and the Supreme Court share responsibility.  Having concluded that Cablevision was not required to plead anticompetitive effect, the court went on to say that Cablevision “ha[d], in any event, pleaded facts sufficient to support plausibly an inference of anticompetitive effect.”  Those alleged facts were that Cablevision would have bought content from another seller but for the tie-in:

Cablevision alleges that if it were not forced to carry the Suite Networks, it “would carry other networks on the numerous channel slots that Viacom’s Suite Networks currently occupy.”  (Compl. par. 10.)  Cablevision also alleges that Cablevision would buy other “general programming networks” from Viacom’s competitors absent the tying arrangement.  (Id.)

In other words, the district court reasoned, Cablevision alleged anticompetitive harm merely by pleading that Viacom’s conduct reduced some sales opportunities for its rivals.

But harm to a competitor, standing alone, is not harm to competition.  To establish true anticompetitive harm, Cablevision would have to show that Viacom’s tie-in reduced its rivals’ sales by so much that they lost scale efficiencies so that their average per-unit costs rose.  To make that showing, Cablevision would have to show (or allege, at the motion to dismiss stage) that Viacom’s tying occasioned substantial foreclosure of sales opportunities in the tied product market. “Some” reduction in sales to rivals–while perhaps “anti-competitor”–is simply not sufficient to show anticompetitive harm.

Because the Supreme Court has emphasized time and again that mere harm to a competitor is not harm to competition, the gaffe here is primarily the district court’s fault.  But at least a little blame should fall on the Supreme Court.  That Court has never precisely specified the potential anticompetitive harm from tying: that a tie-in may enhance market power in the tied or tying product markets if, but only if, it results in substantial foreclosure of sales opportunities in the tied product market.

If the Court were to do so, and were to jettison the silly quasi-per se rule of Jefferson Parish, tying doctrine would be far more defensible.

[NOTE: For a more detailed explanation of why substantial tied market foreclosure is a prerequisite to anticompetitive harm from tie-ins, see my article, Appropriate Liability Rules for Tying and Bundled Discounting, 72 Ohio St. L. J. 909 (2011).]

I share Alden’s disappointment that the Supreme Court did not overrule Basic v. Levinson in Monday’s Halliburton decision.  I’m also surprised by the Court’s ruling.  As I explained in this lengthy post, I expected the Court to alter Basic to require Rule 10b-5 plaintiffs to prove that the complained of misrepresentation occasioned a price effect.  Instead, the Court maintained Basic’s rule that price impact is presumed if the plaintiff proves that the misinformation was public and material and that “the stock traded in an efficient market.”

An upshot of Monday’s decision is that courts adjudicating Rule 10b-5 class actions will continue to face at the outset not the fairly simple question of whether the misstatement at issue moved the relevant stock’s price but instead whether that stock was traded in an “efficient market.”  Focusing on market efficiency—rather than on price impact, ultimately the key question—raises practical difficulties and creates a bit of a paradox.

First, the practical difficulties.  How is a court to know whether the market in which a security is traded is “efficient” (or, given that market efficiency is not a binary matter, “efficient enough”)?  Chief Justice Roberts’ majority opinion suggested this is a simple inquiry, but it’s not.  Courts typically consider a number of factors to assess market efficiency.  According to one famous district court decision (Cammer), the relevant factors are: “(1) the stock’s average weekly trading volume; (2) the number of securities analysts that followed and reported on the stock; (3) the presence of market makers and arbitrageurs; (4) the company’s eligibility to file a Form S-3 Registration Statement; and (5) a cause-and-effect relationship, over time, between unexpected corporate events or financial releases and an immediate response in stock price.”  See In re Xcelera.com Sec. Litig., 430 F.3d 503 (1st Cir. 2005) (applying the Cammer factors).  Other courts have supplemented these Cammer factors with a few others: market capitalization, the bid-ask spread, float, and analyses of autocorrelation.  No one can say, though, how each factor should be assessed (e.g., How many securities analysts must follow the stock?  How much autocorrelation is permissible?  How large may the bid-ask spread be?).  Nor is there guidance on how to balance factors when some weigh in favor of efficiency and others don’t.  It’s a crapshoot.

In addition, focusing at the outset on whether the market at issue is efficient creates a market definition paradox in Rule 10b-5 actions.  When courts assess whether the market for a company’s stock is efficient, they assume that “the market” consists of trades in that company’s stock.  This is apparent from the Cammer (and supplementary) factors, all of which are company-specific.  It’s also implicit in portions of the Halliburton majority opinion, such as the observation that the plaintiff “submitted an event study of various episodes that might have been expected to affect the price of Halliburton’s stock, in order to demonstrate that the market for that stock takes account of material, public information about the company.”  (Emphasis added.)
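To see what such an event study actually measures, here is a minimal sketch of the standard market-model approach.  All of the data, the estimation window, and the function name below are hypothetical illustrations of the general technique, not figures from the Halliburton litigation.

```python
# Minimal market-model event study (hypothetical, synthetic data).
# Asks the "price impact" question directly: did the stock move on the
# disclosure date beyond what market movements would predict?
import numpy as np

def event_study(stock_ret, market_ret, event_idx, window=120):
    """Estimate the event-day abnormal return and its t-statistic,
    using a market model fit on the `window` days before the event."""
    est_s = stock_ret[event_idx - window:event_idx]
    est_m = market_ret[event_idx - window:event_idx]
    beta, alpha = np.polyfit(est_m, est_s, 1)   # OLS: stock = alpha + beta*mkt
    resid = est_s - (alpha + beta * est_m)
    # Abnormal return: actual return minus the market model's prediction.
    ar = stock_ret[event_idx] - (alpha + beta * market_ret[event_idx])
    t_stat = ar / resid.std(ddof=2)             # one-day t-statistic
    return ar, t_stat

# Synthetic illustration: a stock that tracks the market, plus a 5%
# company-specific drop on the "corrective disclosure" day.
rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 121)
stk = 1.2 * mkt + rng.normal(0, 0.005, 121)
stk[120] -= 0.05                                # event-day shock
ar, t = event_study(stk, mkt, event_idx=120)
print(ar, t)   # large negative abnormal return, |t| well above 2
```

A statistically significant abnormal return on the disclosure date is evidence of price impact; an insignificant one suggests the misstatement didn’t move the price, whatever the Cammer factors say about the market generally.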

But the semi-strong version of the Efficient Capital Markets Hypothesis (ECMH), the economic theorem upon which Basic rests, rejects the notion that there is a “market” for a single company’s stock.  Both the semi-strong ECMH and Basic reason that public misinformation is quickly incorporated into the price of securities traded on public exchanges.  Private misinformation, by contrast, usually is not – even when such misinformation results in large trades that significantly alter the quantity demanded or quantity supplied of the relevant stock.  The reason private misinformation is not taken to affect a security’s price, even when it results in substantial changes in quantities demanded or supplied, is that the relevant market is not the stock of that particular company but is instead the universe of stocks offering a similar package of risk and reward.  Because a private misinformation-induced increase in demand for a single company’s stock – even if large relative to the number of shares outstanding – is likely to be tiny compared to the number of available shares of close substitutes for that company’s stock, private misinformation about a company is unlikely to be reflected in the price of the company’s stock.  Public misinformation, by contrast, affects a stock’s price because it not only changes quantities demanded and supplied but also causes investors to adjust their willingness-to-pay or willingness-to-accept.  Accordingly, both the semi-strong ECMH and Basic assume that only public misinformation can be expected to affect stock prices.  That’s why, as the Halliburton majority observes, there is a presumption of price effect only if the plaintiff proves public misinformation, materiality, and an efficient market.  (For a nice explanation of this idea in the context of a real case, see Judge Easterbrook’s opinion in West v. Prudential Securities.)

The paradox, then, is that Basic and the semi-strong ECMH, in requiring public misinformation, assume that the relevant market is not company-specific.  But for purposes of determining whether the “market” is efficient, the market is assumed to consist of trades of a single company’s stock.

The Supreme Court could have avoided both the practical difficulties in assessing market efficiency and the theoretical paradox identified herein had it altered Basic to require plaintiffs to establish not an efficient market but an actual price impact. Alas.