Archives For section 2 symposium

Thom Lambert is an Associate Professor of Law at the University of Missouri Law School and a blogger at Truth on the Market.

A bundled discount occurs when a seller offers to sell a collection of different goods for a lower price than the aggregate price for which it would sell the constituent products individually. Such discounts pose different competitive risks than single-product discounts because, as I explained in this post, they may have an exclusionary effect even if they result in a price that exceeds the cost of producing the bundle. In particular, even an “above-cost” bundled discount may have the effect of excluding rivals that (1) are more efficient at producing the products that compete with the discounter’s but (2) produce a less extensive product line than the discounter. In other words, bundled discounts may drive equally efficient but less diversified rivals from the market.
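A stylized example (with hypothetical numbers of my own, not drawn from the Report or any case) shows how this can happen even though every price exceeds cost. In the sketch below, the discounter never prices below its own costs, yet an equally efficient rival that sells only one of the two products cannot match the discount:

    # Hypothetical illustration (invented numbers): an above-cost bundled discount
    # that an equally efficient single-product rival cannot profitably match.

    cost_A = cost_B = 6            # the discounter's per-unit cost of products A and B
    rival_cost_B = 6               # the rival is equally efficient at B but sells only B
    standalone_A = standalone_B = 10
    bundle_price = 14              # $6 off the $20 aggregate standalone price

    # The bundle price exceeds the cost of producing the bundle, so the discount
    # is not predatory in the Brooke Group sense.
    assert bundle_price > cost_A + cost_B

    # A buyer who takes A from the discounter at its standalone price and B from
    # the rival pays 10 + p_rival.  To leave the buyer no worse off than the $14
    # bundle, the rival can charge at most:
    max_rival_price_B = bundle_price - standalone_A   # = 4

    # That is below the rival's (and the discounter's) cost of B, so the equally
    # efficient but less diversified rival is foreclosed.
    assert max_rival_price_B < rival_cost_B

On these numbers the rival would have to sell B below cost to keep the buyer’s business, which is the sense in which an above-cost bundled discount can exclude equally efficient but less diversified firms.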

Given that they are a “mixed bag” practice (some immediate benefits, some potential anticompetitive harms) and pose risks beyond those presented by straightforward predatory pricing, courts and commentators have struggled to articulate a legal standard that would prevent unreasonably exclusionary bundled discounts without chilling procompetitive bundling. With the notable exception of the en banc Third Circuit’s LePage’s decision, which is essentially standardless, most of the approaches courts and commentators have articulated for evaluating bundled discounts have involved some sort of test that compares prices and costs. Chapter 6 of the Department of Justice’s Section 2 Report explains the various “price-cost” tests in detail.

Based on the presentations in the Section 2 hearings, the Department reached essentially four conclusions concerning bundled discounts:

Continue Reading…

Herbert Hovenkamp is Professor of Law at The University of Iowa College of Law.

The baseline for testing predatory pricing in the Section 2 Report is average avoidable cost (AAC), together with recoupment as a structural test (Report, p. 65). The AAC test, or reasonably close variations such as average variable cost or short-run marginal cost, seems about right, although the differences among them can become very technical and fine. The Report correctly includes in AAC those fixed costs that “were incurred only because of the predatory strategy, for example, as a result of expanding capacity to enable the predatory sales.” (Report, pp. xiv, 64-65) Such a strategy would make some sense for a predator if the fixed costs in question are easily redeployed once the predation has succeeded – for example, in the case of an airline whose planes can be shifted to a different route. In industries that require heavy investment in production capacity that cannot be redeployed, however, the test will in practice approach strict average variable cost. Where fixed costs are relatively high, an investment of this nature that lasted only through the predatory period and became excess capacity thereafter would not be worth it; and if fixed costs are low, the market is almost certainly not prone to monopoly to begin with. AVC is probably underdeterrent, but it is also probably the best we can do without chilling procompetitive behavior.

However, when prices are under AVC, a strict recoupment requirement (see Report, pp. 67-68) is unnecessarily harsh. Proving recoupment requires a prediction about the dominant firm’s prices, costs, and output over a defined future period, which in turn requires a prediction about when new entry will occur, how many firms will enter, and their growth rates. As a result, recoupment is much too difficult to prove and does not serve to distinguish aggressive promotional price cuts from those that are anticompetitive. Rather, structural proof should consist of the things ordinarily required in a Section 2 case; namely, a dominant share of a properly defined relevant market and high entry barriers. That is, the question should be “is durable monopoly pricing in this market possible?,” not “can predation be predicted to yield a durable period of monopoly pricing with sufficient monopoly returns to pay off the investment in predation?” As a factual matter the former requirement is much more manageable and requires far less speculation. An important additional ingredient is causation in the classical tort sense – namely, can the plaintiff show that the prices below average variable cost were of sufficient magnitude and duration to cause its exit from the market? Sporadic or episodic price drops below AVC are unlikely to meet this requirement.

The biggest concern is with false positives. Are there cases in which prices were below AVC long enough to meet the causation requirement and the structural components for monopoly were present, but in which we would not want to condemn the conduct because dollars-and-cents proof of recoupment is not possible? I doubt it.

Continue Reading…

Bruce Kobayashi is a Professor of Law at George Mason Law School.

Dimming the Court’s Brooke Group Bright Line Administrable Rule?

As noted in my earlier post, the Supreme Court’s Brooke Group rule is held out as the primary example of an administrable bright-line rule aimed at controlling the costs of type I error. In practice, the Brooke Group above-cost rule is not as bright as one might wish. The Achilles’ heel of the Brooke Group cost-based rule is its failure to specify what the relevant measure of cost is.

Chapter 4 of the Section 2 Report for the most part does a nice job of setting out the leading alternatives. The Report notes that there is a broad consensus that prices above Average Total Cost (ATC) should be per se legal (Section 2 Report at 61). It also discusses the measures preferred by Areeda and Turner, Marginal Cost (MC) and, as an administrable proxy for MC, Average Variable Cost (AVC), along with criticisms of those measures. The Report endorses Average Avoidable Cost (AAC), which includes both the variable costs and the non-sunk, product-specific fixed costs of producing the incremental output, as the preferred measure, because AAC correctly measures the avoidable cost of producing the incremental predatory output.

While Chapter 4 of the Section 2 Report provides a very useful review and analysis of the alternative cost measures, I thought the discussion simplified away several important issues. The nature of the problems created by this oversimplification can be seen in the numerical example used to illustrate the difference between the various cost measures (Section 2 Report at 64). One problem is that the cost of producing the incremental output in the example is constant, so that AAC is equal to MC; the example thus fails to clearly illustrate the difference between the Areeda-Turner preferred measure (MC) and the DOJ’s preferred measure (AAC). The example also suppresses several other important issues. For example, it measures the incremental output relative to the predating firm’s pre-entry output instead of measuring it relative to the but-for post-entry output. The example’s simplicity makes it easier to understand, but it suppresses issues that make the AAC measure less administrable than an accounting measure such as AVC.
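To see how the measures can come apart once those suppressed issues are restored, here is a minimal sketch with hypothetical numbers of my own (not the Report’s example), in which the incremental cost is not constant and part of the fixed cost is avoidable:

    # Hypothetical cost figures (not the Report's) showing how AAC, AVC, and MC diverge.
    baseline_output = 100                 # units the firm would have sold anyway
    incremental_output = 50               # additional, allegedly predatory units
    unit_variable_cost = [10] * 100 + [12] * 50   # variable cost rises on the incremental units
    avoidable_fixed_cost = 200            # non-sunk, product-specific fixed cost incurred only
                                          # to produce the incremental output

    incremental_variable_cost = sum(unit_variable_cost[baseline_output:])            # 600
    aac = (incremental_variable_cost + avoidable_fixed_cost) / incremental_output    # 16.0
    avc = sum(unit_variable_cost) / (baseline_output + incremental_output)           # ~10.67
    mc_last_unit = unit_variable_cost[-1]                                             # 12

    print(aac, avc, mc_last_unit)

At a price of, say, 14, the conduct would be above cost under AVC or marginal cost but below cost under AAC. A constant-cost example with no avoidable fixed costs, like the one in the Report, collapses these measures into one another and so cannot illustrate the choice among them.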

A more serious issue is the Report’s failure to clearly address the opportunity-cost question, a critical issue in the recent airline cases (United States v. AMR Corp., 355 F.3d 1109 (10th Cir. 2003), and Spirit Airlines v. Northwest Airlines, 431 F.3d 917 (6th Cir. 2005)). In AMR, the DOJ’s position was that when an airplane is shifted from a profitable route (Route S) to expand capacity on the alleged predation route (Route P), avoidable costs for Route P should include the forgone profits from Route S as an opportunity cost. These costs would be added to the flight costs (fuel, crew, passenger costs, etc.). The AMR court rejected inclusion of such forgone profits, but the Spirit court accepted forgone revenues as part of the incremental costs of expanding output (see Areeda & Hovenkamp 2006, 304-11). Continue Reading…


Geoffrey Manne is Director, Global Public Policy at LECG and a Lecturer in Law at Lewis & Clark Law School. He is a founder of Truth on the Market.


Josh Wright is Assistant Professor at George Mason University School of Law and a former Scholar in Residence at the FTC. He blogs regularly at Truth on the Market.

Welcome to the third and final day of the TOTM Symposium on Section 2 and the Section 2 Report.

Yesterday we had a great series of posts discussing the difficult question of whether Section 2 should be governed by a general standard or rather by conduct-specific standards, as well as the merits of the Section 2 Report’s endorsement of the “substantial disproportionality” test. For convenience, yesterday’s posts are linked here.

Today we’ll turn our attention from the debate over general standards to more specific substantive issues throughout the Report, ranging from the Report’s proposed treatment of practices like predatory pricing, bundled discounts, and exclusive dealing to issues including defining monopoly power, crafting effective monopolization remedies, the intersection of monopolization law and intellectual property, and the implications of the Section 2 Report for international antitrust.  Today we’ll hear from (in some cases more than once):

  • Bruce Kobayashi (George Mason) on predatory pricing
  • Herbert Hovenkamp (Iowa Law) on predatory pricing and bundled discounts
  • Thom Lambert (Missouri Law) on bundled discounts
  • Dan Crane (Cardozo/Michigan) on bundled discounts
  • Howard Marvel (Ohio State) on exclusive dealing
  • Josh Wright (George Mason) on exclusive dealing and loyalty discounts
  • Tim Brennan (UMBC) on distinguishing predation from exclusion
  • Bill Kolasky (WilmerHale) on monopoly power
  • Herbert Hovenkamp (Iowa Law) on patents and exclusionary conduct
  • Tim Brennan (UMBC) on the relationship between regulation and antitrust
  • Bill Page (Florida) on monopolization remedies
  • Alden Abbott (FTC) with an international perspective on single-firm conduct

We’re looking forward to hearing from the participants, and hope you’ll join in the comments.

Bruce Kobayashi is a Professor of Law at George Mason Law School.

One of the most important changes in the antitrust laws over the past 40 years has been the diminished reliance on rules of per se illegality in favor of rule of reason analysis. With the Court’s recent rulings in Leegin (eliminating the per se rule against minimum RPM) and Independent Ink (eliminating the presumption that a patent confers market power in tying cases), the evolution of the antitrust laws has left only tying (under a “modified” per se rule) and horizontal price fixing under per se rules of illegality. This movement reflects advances in law and economics that recognize that vertical restraints, once condemned as per se illegal when used by firms with antitrust market power, can be procompetitive. It also reflects the judgment that declaring such practices per se illegal produced high type I error costs (the false condemnation and deterrence of procompetitive practices).

The widespread use of the rule of reason can be problematic, however, because of the inability of antitrust agencies and courts to reliably differentiate between pro- and anticompetitive conduct. Conduct analyzed under Section 2 often has the potential both to generate efficiencies and to harm competition, and finding a way to reliably differentiate between the two has been described as “one of the most vexing questions in antitrust law” (Section 2 Report, p. 12). Under these conditions, applying rule of reason analysis on a case-by-case basis may not substantially reduce error costs and can drastically increase the costs of enforcement. Thus, under the decision theory framework widely used by economists and courts, which teaches that optimal legal standards should minimize the sum of error costs and enforcement costs, “bright line” per se rules of legality and illegality can dominate more nuanced but error-prone standards under the rule of reason. Continue Reading…

William Page is a Marshall M. Criser Eminent Scholar in Electronic Communications and Administrative Law at the University of Florida, Levin College of Law.

The DOJ’s § 2 Report offers two recommendations under the heading of “General Standards for Exclusionary Conduct.” First, for evaluating alleged acts of exclusion, the Report endorses the burden-shifting framework of the D.C. Circuit’s 2001 Microsoft decision. Second, after canvassing various standards of anticompetitive effect, the Report settles on the “disproportionality test,” under which “conduct that potentially has both procompetitive and anticompetitive effects is anticompetitive under section 2 if its likely anticompetitive harms substantially outweigh its likely procompetitive benefits.”

In this post, I’d like to comment on these recommendations by recalling how the D.C. Circuit applied its burden-shifting approach in Microsoft. In doing so, I draw on The Microsoft Case: Antitrust, High Technology, and Consumer Welfare (Chicago 2007), which I wrote with John Lopatka of Penn State.

Under the D.C. Circuit’s burden-shifting approach, the plaintiff is first required to show that the defendant’s conduct harmed not only competitors but the “competitive process and [therefore] consumers.” If the plaintiff does so, the defendant must offer a procompetitive justification for the conduct, that is, “a nonpretextual claim that its conduct is indeed a form of competition on the merits because it involves, for example, greater efficiency or enhanced consumer appeal.” If the defendant produces a justification, the plaintiff is required either to refute it or to prove that the anticompetitive harm outweighs any benefit. Continue Reading…


Michael A. Salinger is a managing director in LECG’s Cambridge office and a professor of economics at the Boston University School of Management, where he has served as chairman of the department of finance and economics. He is a former Director of the Bureau of Economics at the FTC.

Much of the disagreement between the Antitrust Division and the FTC stems from chapter 3, which discusses general standards for Section 2 liability.

A major portion of chapter 3 concerns whether there is a unifying principle underlying appropriate doctrine for all behavior challenged under Section 2. A substantial portion of the chapter is devoted to specific proposals: “effects balancing,” “no economic sense,” “profit sacrifice,” “equally-efficient competitor,” and “disproportionality” tests. I found most of the discussion in chapter 3 to be quite sensible. The problems it cites with the effects-balancing test are well-founded. That test would be great for the incomes of consulting economists, but it requires more precision in economic analysis than the current state of the art can deliver and would likely lead to errors in both directions. The discussion of the profit-sacrifice and no-economic-sense tests helps clarify the distinction between the two. A mere profit-sacrifice test is too loose a liability standard. A no-economic-sense test is sometimes useful, but it should not be the universal standard for Section 2 liability. The discussion of the equally-efficient-competitor test was balanced and useful.

The most controversial part of the chapter is the discussion of the disproportionality test, and in particular the conclusion that, even though the Department does not believe any single test is appropriate for all conduct, the disproportionality test is its preferred default. I am puzzled by that conclusion. It is at odds with the chapter’s ultimate conclusion that different kinds of conduct warrant different tests because of differences in the costs of false positives and false negatives.

The standard for predatory pricing established in Matsushita and Brooke Group is, in effect, a “no economic sense” test. Pricing below the relevant measure of cost is behavior that qualitatively makes no sense unless it drives out rivals. (Of course, those cases also include the requirement that the exclusionary hypothesis make economic sense.) Aspen Skiing relies on “no economic sense” logic. (Exclusion was the only plausible reason that Aspen would not sell lift tickets to Aspen Highlands on the same terms as it sold them to the general public.) In my opinion, predatory pricing and refusals to deal are both classes of conduct for which we should be more concerned with false positives than false negatives.

One of the standard criticisms of the no-economic-sense test is that $1 of efficiencies can get a company off the hook for behavior that generates $100 of competitive harm. The disproportionality test addresses that criticism. A disproportionality test may be a better conceptual standard than “no economic sense” for refusals to deal (and perhaps Aspen Skiing is more accurately characterized as applying a disproportionality test), but the difference between the two is a marginal adjustment. Substituting a disproportionality test (which one can think of as a “little economic sense” test) for a “no economic sense” test does not address the primary concern with these tests as general standards. There might be other classes of behavior (like bundled discounts) where the relative concern with false positives and false negatives dictates a standard that trades off those two risks very differently.
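To make the “marginal adjustment” concrete, the following sketch (with invented dollar figures and an arbitrary threshold standing in for “substantially”) contrasts the two screens on the $1-versus-$100 hypothetical:

    # Hypothetical comparison of the two liability screens (invented numbers).

    def no_economic_sense(benefit, harm):
        # Liability only if the conduct would make no business sense apart from
        # its tendency to exclude rivals, i.e., it has no procompetitive benefit.
        return benefit <= 0 and harm > 0

    def disproportionality(benefit, harm, ratio=10):
        # Liability if the likely harm substantially outweighs the likely benefit;
        # the ratio of 10 is an arbitrary stand-in for "substantially."
        return harm > ratio * benefit

    print(no_economic_sense(benefit=1, harm=100))    # False: $1 of efficiencies defeats liability
    print(disproportionality(benefit=1, harm=100))   # True: the harm is wildly disproportionate

The disproportionality screen catches the $1/$100 case that slips through the no-economic-sense test, but both remain highly permissive, which is why swapping one for the other does little to answer the objection to using either as a universal standard.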

The DOJ’s embrace of the disproportionality test reflects a greater concern with false positives than false negatives for all behavior subject to challenge under Section 2. I agree with the objection Commissioners Harbour, Leibowitz, and Rosch raised with respect to this aspect of the DOJ Report. Of course, other aspects of their statement were regrettable. First, there is the rhetoric (“a blueprint for radically weakened enforcement”). And who are all the “stakeholders”? The historical position of the FTC has been that consumers are the only proper stakeholders in antitrust. What other stakeholders did they have in mind? Is the antitrust bar (or the plaintiffs’ side of it) a proper stakeholder to consider? And their discussion of the individual topics failed to reflect a nuanced, decision-theoretic analysis of which practices require standards that tolerate false negatives to avoid false positives and which require standards under which a risk of false positives should be tolerated.

The final conclusion in chapter 3 of the DOJ report is that different classes of conduct require different standards based on differences in the relative risk of different types of errors. I think that is the right conclusion and that is what should frame the entire debate.

Keith Hylton is a Professor of Law at Boston University School of Law.  [Eds – This post originally appeared on Day 1 of the Symposium, but we are re-publishing it today because it bears directly on today’s debate over general standards]

The “error cost” or “decision theory” approach to Section 2 legal standards emphasizes the probabilities and costs of errors in monopolization decisions.  Two types of error, and two associated types of cost, are examined.  One type of error is a false acquittal, or false negative.  The other is a false conviction, or false positive.  Under the error cost approach, a legal standard should be chosen that minimizes the total expected costs of errors.

Suppose, for example, the legal decision maker has a choice between two legal standards, A and B.  Suppose under standard A the probability of a false acquittal is 1/4 and the probability of a false conviction is 1/5.  Under standard B, the probability of a false acquittal is 1/5 and the probability of a false conviction is 1/4.  Suppose the cost of a false acquittal is $1 and the cost of a false conviction is $2.  The expected error cost of standard A is therefore (.25)($1) + (.2)($2) = $.65.  The expected error cost of standard B is (.2)($1) + (.25)($2) = $.70.  Since the expected error cost of standard B is greater than that of standard A, standard A should be preferred.
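Restating that comparison as a small calculation (using only the numbers given in the paragraph above):

    # Expected error cost of a legal standard:
    #   P(false acquittal) * cost(false acquittal) + P(false conviction) * cost(false conviction)

    def expected_error_cost(p_false_acquittal, p_false_conviction,
                            cost_false_acquittal=1.0, cost_false_conviction=2.0):
        return (p_false_acquittal * cost_false_acquittal
                + p_false_conviction * cost_false_conviction)

    standard_A = expected_error_cost(p_false_acquittal=0.25, p_false_conviction=0.20)   # $0.65
    standard_B = expected_error_cost(p_false_acquittal=0.20, p_false_conviction=0.25)   # $0.70

    print("A" if standard_A < standard_B else "B")   # prints "A": the lower expected error cost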

In monopolization law there are several legal standards that have been applied by courts and proposed by commentators, such as the balancing test, the specific intent test, the profit sacrifice test, the disproportionality test, the equally efficient competitor test, and the no-economic-sense test, among others.  Almost all of these tests can be grouped under the alternative categories of balancing or non-balancing tests.  Under the error cost approach, the ideal legal standard for any given area of monopolization law is the one that generates the smallest expected error cost.  Moreover, each of these tests has been proposed as a default rule to be applied across the board but abandoned in specific cases that merit an alternative standard.

The Department of Justice’s recent Section 2 Report reviews the various monopolization standards and embraces the disproportionality test as the best default rule.  The disproportionality test holds the defendant liable under Section 2 only when the anticompetitive effects of his conduct are disproportionate in light of the procompetitive benefits.  This approach makes sense if one adopts the view, as did the authors of the DOJ Report, that the costs of false convictions under monopolization law are larger than the costs of false acquittals.  The disproportionality test is quite close in application to the specific intent test, the no-economic-sense test, and one version of the profit sacrifice test.

Although it is ultimately an empirical question, there are several reasons to adopt the presumption that false conviction costs are greater than false acquittal costs in the monopolization context.  Two of the most persuasive reasons are based on the incentives for entry and for rent-seeking.  The costs of false acquittals in the monopolization setting can be kept in check through the threat of competitive entry.  The costs of false convictions, on the other hand, generate rent seeking incentives to file suit under Section 2 on the part of firms that compete against dominant firms.  Another important reason for the presumption is the asymmetric impact of errors.  False acquittals permit one firm, the falsely-acquitted defendant, to continue practices that harm consumers.  False convictions overdeter dominant firms in general, and can lead to a form of soft competition which is especially harmful to consumers.

One of the key purposes of error cost analysis is to serve as a bridge between economic theory and legal standards in antitrust.  Economic models often assume that courts can implement legal tests with perfect accuracy.  But that assumption does not always hold.  The accuracy of a balancing test that requires courts to distinguish vigorous competition from predation will depend on the quality of judges, juries, lawyers, and the procedural mechanisms in place for conducting a trial.  Even a small risk of error leading to a possible multibillion-dollar trebled judgment can lead a firm that has not engaged in anticompetitive conduct to alter its behavior to avoid the risk of an antitrust lawsuit.  For economic theory to lead to useful recommendations for antitrust courts, analysts must consider the likelihood and costs of error under proposed monopolization tests.  Error cost analysis provides a framework for courts to screen and assign “value weights” to the recommendations of economic analysis.

William Kolasky is a partner in WilmerHale’s Regulatory and Government Affairs Department, a member of the firm’s Antitrust and Competition Practice Group, and a former Deputy Assistant Attorney General in the Antitrust Division at the Department of Justice.

The most controversial part of the Justice Department’s Single Firm Conduct Report is the Department’s proposed use of what it terms a “substantial disproportionality” test for exclusionary conduct. Under this test, the Justice Department would bring a case only if the harm to consumers and competition caused by a dominant or near-dominant firm’s conduct is “substantially disproportionate” to any legitimate benefits the firm might realize. The Department argues that this test is superior to the three alternative tests it considers—an effects-balancing test, a no-economic-sense test, and an equally-efficient-competitor test—because it is more administrable and because it reduces the risk of false positives (i.e., finding conduct unlawful that does not harm competition), which the Department views as more serious than that of false negatives (i.e., finding conduct lawful that does harm competition).

Critics of the Department’s report argue that this test places a finger on the scale in favor of monopolists and near-monopolists, leaving consumers and smaller competitors with too little protection, and it is certainly easy to see why the test gives rise to this perception. But there is another, more fundamental problem with the Justice Department’s proposed test – namely, that it perpetuates the outdated view of the rule of reason as an ad hoc balancing test, and does not take into account the extent to which the Supreme Court and the lower courts have given greater structure to rule of reason analysis over the last thirty years. Continue Reading…

Thom Lambert is an Associate Professor of Law at the University of Missouri Law School and a blogger at Truth on the Market.

There’s a fundamental problem with Section 2 of the Sherman Act: nobody really knows what it means. More specifically, we don’t have a very precise definition for “exclusionary conduct,” the second element of a Section 2 claim. The classic definition from the Supreme Court’s Grinnell decision — “the willful acquisition or maintenance of [monopoly] power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident” — provides little guidance. The same goes for vacuous statements that exclusionary conduct is something besides “competition on the merits.” Accordingly, a generalized test for exclusionary conduct has become a sort of Holy Grail for antitrust scholars and regulators.

In its controversial Section 2 Report, the Department of Justice considered four proposed general tests for unreasonably exclusionary conduct: the so-called “effects-balancing,” “profit-sacrifice/no-economic-sense,” “equally efficient competitor,” and “disproportionality” tests. While the Department concluded that conduct-specific tests and safe harbors (e.g., the Brooke Group test for predatory pricing) provide the best means of determining when conduct is unreasonably exclusionary, it did endorse the disproportionality test for novel business practices for which “a conduct-specific test is not applicable.” Under the disproportionality test, “conduct that potentially has both procompetitive and anticompetitive effects is anticompetitive under section 2 if its likely anticompetitive harms substantially outweigh its likely procompetitive benefits.”

According to the Department, the disproportionality test satisfies several criteria that should guide selection of a generalized test for exclusionary conduct. It is focused on protecting competition, not competitors. Because it precludes liability based on close balances of pro- and anticompetitive effects, it is easy for courts and regulators to administer and provides clear guidance to business planners. And it properly accounts for decision theory, recognizing that the costs of false positives in this area likely exceed the costs of false negatives.

While it has some laudable properties (most notably, its concern about overdeterrence), the disproportionality test is unsatisfying as a general test for exclusionary conduct because it is somewhat circular. In order to engage in the required balancing of pro- and anticompetitive effects, one needs to know which effects are, in fact, anticompetitive. As the Department correctly noted, the mere fact that a practice disadvantages or even excludes a competitor does not make that practice anticompetitive. For example, lowering one’s prices from supracompetitive levels or enhancing the quality of one’s product will usurp business from one’s rivals. Yet we’d never say such competitor-disadvantaging practices are anticompetitive, and the loss of business to rivals should not be deemed an anticompetitive effect of the practices.

“Anticompetitive” harm presumably means harm to competition. We know that that involves something other than harm to individual competitors. But what exactly does it mean? If Acme Inc. offers a bundled discount that results in a bundle price that is above the aggregate cost of the products in the bundle but cannot be met by a less diversified rival, is that a harm to competition or just a harm to the less diversified competitor? If Acme pays a loyalty rebate that results in an above-cost price for its own product but usurps so much business from rivals that they fall below minimum efficient scale and thus face higher per-unit costs, is that harm to competition or to a competitor? These are precisely the sorts of hard (and somewhat novel) cases in which we need a generalized test for exclusionary conduct. Unfortunately, they are also the sorts of cases in which the Department’s proposed disproportionality test is unhelpful.
Continue Reading…


Geoffrey Manne is Director, Global Public Policy at LECG and a Lecturer in Law at Lewis & Clark Law School.  He is a founder of Truth on the Market.


Josh Wright is Assistant Professor at George Mason University School of Law and a former Scholar in Residence at the FTC.  He blogs regularly at Truth on the Market.

Welcome to the second day of the TOTM Symposium on Section 2 and the Section 2 Report. Yesterday, we started the symposium with a variety of perspectives on the Section 2 Report, the process under which it was created, the subsequent inter-agency debates, and the future of Section 2 enforcement.  We’ve collected links to the posts from the first day here:

  • Tad Lipsky (Latham and Watkins) kicked things off by asking whether the FTC dissent from the Section 2 Report and the financial crisis would lead to a reduced role for economics in antitrust along with a dramatically more aggressive monopolization agenda.  (“Does the HLR Statement — as well as new AAG Christine Varney’s vow to “rebalance legal and economic theories” in antitrust — portend future government actions against unilateral conduct that would fail to pass through the “economic sense” screen?”)
  • Michael Salinger (Boston University/LECG) wrote an incisive post about the need for–and difficulty of–employing an error cost framework in Section 2 analysis.  It is worth repeating the admonition in his final paragraph here, perhaps to fuel the next two days’ comments:
    Absent objective measures of the necessary inputs into the decision analysis of antitrust standards, positions necessarily rest on subjective estimates.  In this symposium, I believe it would be useful for commenters to articulate as best they can what they believe about the frequency of pro- and anti-competitive uses of practices, the benefits from the competitive uses and costs of the anticompetitive uses, and the quality of the available screens to distinguish between the competing hypotheses.  It would also be useful to examine what the foundations of those beliefs are.  In particular, to what extent are the beliefs based on evidence and to what extent are they based on the plausibility of underlying theory.

  • Dan Crane (headed to Michigan Law) noted that, while the failure to issue a consensus report on Section 2 (even one somewhat watered down on hot-button issues) was a missed opportunity to provide some much-needed instructional value to courts on substantive monopolization issues percolating in the lower courts, perhaps the greatest failure of the hearings was “that the agencies were not able to speak as one voice on the rules that should govern private monopolization lawsuits, an issue on which the agencies do not have a direct stake and hence could have served as an ‘honest broker.’”
  • Alden Abbott (FTC) then presented us with the insider view of the Section 2 Hearings and the Report process, noting that the apparent disagreement between the agencies is part and parcel of an expansive international debate over the appropriate approach to single firm conduct, and reminding us that (as other participants noted, as well), the Report represents several years of work by staff members from both agencies, and is, in fact, part of an ongoing, collaborative process.
  • David Evans (LECG, UCL, University of Chicago) offered the first of three economists’ perspectives on the Section 2 Report, emphasizing the promise of an “evidence based” approach to antitrust and throwing down the gauntlet to the economics profession to do the rigorous empirical work necessary to fuel the error cost approach.
  • Keith Hylton (Boston University) next succinctly set out the essential defense of an error cost approach that favors avoiding false positives:
    Although it is ultimately an empirical question, there are several reasons to adopt the presumption that false conviction costs are greater than false acquittal costs in the monopolization context.  Two of the most persuasive reasons are based on the incentives for entry and for rent-seeking.  The costs of false acquittals in the monopolization setting can be kept in check through the threat of competitive entry.  The costs of false convictions, on the other hand, generate rent seeking incentives to file suit under Section 2 on the part of firms that compete against dominant firms.  Another important reason for the presumption is the asymmetric impact of errors.  False acquittals permit one firm, the falsely-acquitted defendant, to continue practices that harm consumers.  False convictions overdeter dominant firms in general, and can lead to a form of soft competition which is especially harmful to consumers.

  • Howard Marvel (Ohio State University) finished things off for our trio of economists by considering the implications of the financial crisis for the Section 2 debates and antitrust enforcement more generally, criticizing some recent calls from overseas and the United States to incorporate the “too big to fail” notion into modern antitrust analysis as harkening back to the days of some of antitrust’s most discredited and counterproductive ideas.

Today we continue our symposium with a series of posts and, we hope, an engaged discussion over the thorny question of a general standard for Section 2 analysis.  Today we’ll hear from:

  • Thom Lambert (University of Missouri Law)
  • Bill Kolasky (WilmerHale)
  • Keith Hylton (Boston University Law)
  • Michael Salinger (Boston University Business and LECG)
  • Bill Page (University of Florida Law)
  • Bruce Kobayashi (George Mason Law)

We hope you’ll join us, and join in the comments.

Howard P. Marvel is Professor of Economics in the Department of Economics and Professor of Law in the Moritz College of Law, both at The Ohio State University.

In the wake of Bork and Posner, and Baxter and the Reagan Revolution, a consensus emerged that big could be bad, but the harm that dominant firms could do needed to be demonstrated, not simply assumed in consequence of their sheer size. Moreover, the demonstration required harm to competition. The consensus held through the Clinton Administration, buoyed by the talented economists that it attracted. The Section 2 Report is controversial in drawing lines about where harm to competition begins, but it is not hard to imagine all sides of the debate agreeing with this from the report: “Competition is ill-served by insisting that firms pull their competitive punches so as to avoid the degree of marketplace success that gives them monopoly power or by demanding that winning firms, once they achieve such power, ‘lie down and play dead.’” (Report, p. 8)

Or at least it was not difficult to limit the ground rules under which the debate would take place. But times have changed. We now worry not just about whether dominant firms abuse their positions.  Add the worry that they are too big to fail. Nobody appears concerned about GM preying upon its rivals, or even whether its new owners – you, me, and the UAW – might be tempted to run the company in an anticompetitive fashion backed by Federal power. But we might have preferred to see Chevy, Cadillac, Saturn, and the rest fold asynchronously, permitting us to stand on the sidelines as the calamity proceeded slowly enough to allow each succeeding failure to be digested separately.  Who could break up the behemoths into little pieces? Calling Senator Sherman. Continue Reading…