Section 2 Symposium: David Evans–An Economist's View

David Evans —  4 May 2009

David Evans is Head, Global Competition Policy Practice, LECG; Executive Director, Jevons Institute for Competition Law and Economics, and Visiting Professor, University College London; and Lecturer, University of Chicago.

The treatment of unilateral conduct remains an intellectual and policy mess as we finish out the first decade of the 21st century. There were signs of hope a few years ago. The European Commission embarked on an effort to adopt an effects-based approach to unilateral conduct and to move away from the analytically empty, object-based approach developed by the European Courts. Meanwhile, the Federal Trade Commission and the U.S. Department of Justice embarked on a series of hearings on unilateral conduct that brought the best thinkers together in the hope of achieving some consensus. Those hopes were dashed in 2008. The Justice Department and the FTC splintered. The DOJ issued a lengthy report that, for all intents and purposes, argued for significantly limiting the circumstances under which a business practice could be found to constitute anticompetitive unilateral conduct. Three of the four sitting Federal Trade Commissioners quickly asserted their fundamental disagreement. Toward the end of the year the European Commission finally issued a document that adopted an effects-based approach, sort of, but only as guidance for its prosecutorial discretion over which cases to focus its resources on. I say “sort of” because, although much of the framework it adopts is quite sensible, the Commission places virtually insurmountable obstacles in the way of considering efficiencies. (For a comparative review of the EC and DOJ reports, see my article here.)

The incoherence and discord in this area of antitrust law fundamentally result from a failure on the part of economists and other antitrust scholars to roll up their sleeves and do significant empirical work. Instead, two kinds of flags get waved to urge the respective followers on. The first involves the famous economic possibility theorem: “it could, therefore it will.” That’s a statement based on a seemingly scientific economic model that says something might happen under some conditions. Oftentimes those conditions are unstated, buried in footnotes, or buried in mathematics that only the careful reader uncovers. (See my article with Jorge Padilla here.) Many are guilty of waving possibility theorems around. But in my experience the pro-intervention crowd, including the competition authorities, is the most enamored with untested, assumption-driven economic theory.

The second involves error costs. These costs can go in either direction, but it is the non-intervention crowd that has cited error costs the most. The argument is that courts will mistakenly condemn some pro-competitive practices in the course of condemning anti-competitive ones, and that these mistakes will tend to discourage firms in the economy from engaging in pro-competitive behavior, like offering really low prices. This error cost framework, and the notion that false positives are highly likely and costly, form the spine of the Justice Department report. Unfortunately, while one can debate the plausibility of the likelihood of errors and their costs (and that’s a worthwhile exercise), there is essentially no empirical evidence on them. Error costs are easy to invoke, hard to demonstrate.

If the debate on unilateral practices continues in its current vein, it will be resolved by who can scream the loudest, get elected, or appoint the judiciary. That really isn’t a very satisfactory outcome for two disciplines, law and economics, that pride themselves, in different ways, on uncovering truths. To make any progress the antitrust profession, and its industrial economics handmaiden, need to place greater value on empirical work and get on with developing fact-based analyses. As Michael Salinger and I have observed, one can learn a lot about business practices by understanding whether and to what extent they are used by competitive firms. There are many other avenues for empirical research. The United States provides a useful laboratory: has anticompetitive predation increased significantly since the courts made it almost impossible for a plaintiff to win? At the same time, divergent antitrust rules (and private enforcement) in the 50 US states provide another remarkable natural laboratory in which to test the efficiency of various practices and the efficacy of various enforcement decisions. Meanwhile, the antitrust profession needs to impose analytical and empirical rigor on the error cost framework. There is virtually no empirical work on the frequency of errors in antitrust matters or their costs. At least part of that empirical work needs to come in the form of retrospectives on cases in the US and elsewhere in which plaintiffs or defendants have won.

We have made great progress over the last 50 years in turning antitrust into a rigorous discipline based on theory and evidence. There’s no reason that can’t be extended to unilateral conduct despite the hiccups of 2008.

6 responses to Section 2 Symposium: David Evans–An Economist's View

    Michael Salinger 4 May 2009 at 4:00 pm

    It’s harder than most people realize for the agencies to gather the relevant evidence. This is true even for the FTC, which has a statutory mandate to do studies. The hearings were an open invitation to companies to speak up about false positives. They did not take the opportunity. Trying to compel information on something as open-ended as actions foregone for fear of antitrust liability would be fruitless.

    The student who compared Illinois and Iowa provided useful evidence. Like much evidence, it may be subject to a variety of interpretations. I am confident, though, that it was a more important contribution to knowledge than 99% of the dissertations with estimates of BLP models. The thin academic rewards for establishing facts rather than theorems are a big part of the problem.

    One of the subjects covered in the FTC at 100 hearings was how to get the relevant research done. My position was that most of it has to occur outside the agencies, but that a challenge is overcoming a relative academic disinterest in research that is relevant to policy. The agencies should try to prime the pump by sponsoring conferences with enough of a lead time that it would generate new research.

    Howard P. Marvel 4 May 2009 at 3:00 pm

    So everybody agrees we need more empirical work. Why don’t we have it? David Evans says that we could look for evidence that the difficulty of proving predation either has or has not contributed to more predatory behavior. But it’s very tough to prove a negative, and anyway, more than what? Do we have fewer low cost air carriers than we “should” have? Do we have fewer low cost carriers than Europe, where they care more about abuse of dominance, when they aren’t worrying about the fate of state-owned carriers? How will I know predation, empirically, when I see it? Does it matter if, when I see it, it doesn’t work, a la Genesove and Mullin on the Sugar Trust? If nobody bothers to complain about predation, does that mean there is none?

    Josh Wright says we could use interstate comparisons where regulatory environments differ. That’s not so easy, either. We are about to get a comparison when the Maryland Leegin-repealer goes into force, but that comparison is really tricky (see my 1986 JPE article for why). I had a student who tried to use a change in Iowa’s franchise regulation that made termination tougher. He tracked down store ownership before and a decade after the change for restaurants in the Quad Cities–two in Iowa and two across the Mississippi in Illinois. After the change, almost all the Iowa stores were company-owned, while the Illinois units continued to be operated predominantly by franchisees. But this perverse, if predictable, result was more like a repeated case study than a statistical test. Another student has looked at the effect in Indiana of the end of that state’s ban on exclusive territories for beer distribution. The problem is that beer is heavily regulated. Limited permits in Indiana have prevented the expansion of sales by smaller outlets that I had predicted. As with predation and termination studies, the comparisons we can find are typically so far from clean that it is tough to impossible to draw compelling conclusions.

    It is easy for me to note that calling for empirical work is easier than providing it. But since the agencies are probably best positioned to obtain the data for such studies, the most constructive recommendation is that they should think seriously about filling the gap that Evans and Hylton and Wright all agree is there.


    I would hope that all would agree that more analytical rigor and empirical evidence to shed light on sensible application of the error-cost framework would be a favorable development. Let me add two minor points.

    The first is that one (in my view largely untapped) source of data to inform these policy questions is interstate variation in treatment of various business practices under state-specific franchising statutes (for example, prohibition of exclusive dealing contracts) or state antitrust laws. While some work in this area has been done, I think this may be an area where there is some low hanging fruit for economists and empirical law and economics scholars to contribute.

    The second minor point is in agreement with Keith’s highlighting of the fact that we ought not to think just about how to empirically strengthen the error cost framework in order to harness its analytical strength to the benefit of consumers, but also about the right defaults conditional upon the current existing evidence.

    One of the points that emerges from the Section 2 Report and the FTC dissenting statement is that there is an apparently widespread and growing view that we need not think about false positives any more (at least not like we used to) because antitrust is different now than it was in the 1980s when Frank Easterbrook wrote Limits of Antitrust. I’m surprised by these arguments. And they come from high places. But they should be unpersuasive, largely (there are other reasons too) because they rely on plaintiff win rates to say something about the incidence of errors but say nothing about the prospect of judicial error in identifying illegal conduct chilling pro-competitive behavior.

    In Michael Salinger’s kickoff post to this symposium, he noted that this would have been a great contribution of the Report, i.e. learning more about false positives and their relative incidence and magnitude. It remains a great research program for antitrust economists, but I concur that the Report did not deliver on this front.

    Keith Hylton 4 May 2009 at 12:17 pm

    I agree, but here’s a minor quibble. I’m in favor of more empirical work. But for the present, we should stick with presumptions that make sense — e.g., entry constrains pricing, which constrains false conviction costs in some settings at least. Otherwise, if we pretend to be agnostic about error costs in the absence of empirical evidence, then we may put too much weight on some counterintuitive empirical result that appears in one article — like the Card study on the employment effects of the minimum wage.
    The error cost arguments appear to favor a less interventionist approach, but the Trenton Potteries argument for the per se rule is also an error cost argument. Error cost arguments are based on sensible empirical presumptions which should not be overturned lightly.
