The first thing we do, let's kill the quants!

Professor Bainbridge has a provocative post up taking on empirical legal scholarship generally.  While the Professor throws a little bit of a nod toward quantitative work, suggesting it might at least provide some “relevant grist for the analytical mill,” he concludes that “it’s always going to be suspect — and incomplete — in my book.”  Here’s a taste:

And then there’s a recent paper I had to read on executive compensation, whose author should remain nameless. The author found a statistically significant result that he didn’t like. So he threw it under the bus by claiming that his regressions were flawed. Accordingly, he turned to panel data analyses that gave him a result he liked. All the while, another paper on the same topic had found the same results as our author’s regressions. Who was it that said statistics don’t lie?

On top of which, of course, there’s the problem that the number crunchers can only tell you something when they’ve got numbers to crunch. Suppose there was a change in the law in 2000. A before and after comparison might be instructive. But companies weren’t required to disclose the relevant information until 2005. You don’t have anything to measure.

I’m tempted to ask how many legal theory debates have been resolved convincingly by a single paper.  But instead I’ll try to do something more constructive, or at least I hope so.  Larry Ribstein chimes in to make the well-taken starting point that theory itself is not useful without data.  That is obviously right.  And I think if one were to ask whether the legal literature was suffering from too many theories or too little empirical knowledge, I’d opt for the latter.  But I’ve got a few different bones to pick.

The first is that, even holding regressions and sophisticated quantitative analysis aside for the moment (we’ll come back to them), the data should constrain the theory.  In fields I am familiar with, there is a great deal of legal scholarship that simply ignores the few stylized facts or empirical regularities established by the empirical literature and nonetheless builds theories and rattles off policy implications.  Of course, the theory that legal theorists reject an empirical methodology precisely because it restricts their ability to generate theory willy-nilly is not one that can be casually rejected.  I mean, unconstrained theory does sound like more fun, doesn’t it?

Second, I read Bainbridge’s post as revealing a common tendency in the legal academy to dismiss empirical evidence if it, alone, is not sufficient to resolve some policy debate.  Empirical evidence is hard to collect, and a body of empirical knowledge builds over time.  My sense is that the gut instinct of law professors is that the role of empirical scholarship is to “prove” assertion X in the way that one might establish the proper interpretation of a contract or statute.  Fads in legal scholarship are another symptom of this high discount rate in the academy, that is, the impatience for immediate and definitive answers.  For example, I’ve complained before about what I think is the over-use of behavioral law and economics, the lack of rigor in drawing out policy implications from the evidence and models, and insufficient attention to empirical data.  There is a premium in the legal literature on striking while the topic is hot, on over-claiming, and of course on having a catchy paper title; without expert peer review of claims concerning the relevant empirical literature, perhaps the over-claiming is to be expected.  In any event, the point is that to the extent “law and social science” disciplines like law and economics want to be taken seriously, and claim the advantages of the ancillary discipline, they cannot simultaneously reject the methodological commitments that come with it, even if those commitments mean things move a bit slower than the law review (or news) cycle.

Third, paragraphs like Bainbridge’s first make me wonder whether this attitude (not his specifically) toward empirical work reflects informed judgment or just a reflexive tendency to question sophisticated models that lie outside our own strike zone.  Knowing nothing about the paper the Professor is talking about in his example, one can think in the abstract of lots of reasons an empiricist might run a plain vanilla OLS specification as a baseline, report the results, suggest that OLS has various problems in that setting, move on to a more sophisticated panel approach, and compare the more robust results to contrasting ones in the literature.  Of course, the sort of specification search that Steve hints at could also be going on.  I’ve got no horse in that race.  But the more general point is that critiques of econometric models by non-econometricians sometimes betray a thinly veiled anti-empirical bias, one that is often dressed up in the language of “omitted variable bias.”  I can vouch for having given dozens of workshops at law schools where no amount of discussion of panel data techniques, or of what exactly fixed effects control for, is sufficient to answer the “but did you control for X, Y and Z” question.
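To make that baseline-then-panel progression concrete, here is a minimal sketch of the workflow I have in mind.  To be clear, everything in it is an illustrative assumption on my part, not the paper Bainbridge discusses: the file name, the variables (comp, perf, size, firm, year), and the specifications are all hypothetical.

```python
# Illustrative sketch only (hypothetical data and variable names):
# report a pooled OLS baseline, then a firm-and-year fixed-effects
# (least squares dummy variable) specification for comparison.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel of executive compensation data.
df = pd.read_csv("exec_comp_panel.csv")

# Baseline: plain vanilla pooled OLS, reported so readers can
# compare against prior results in the literature.
ols = smf.ols("comp ~ perf + size", data=df).fit()

# Panel approach: firm and year fixed effects absorb anything about
# a firm that does not vary over time, so "did you control for X?"
# is already answered for any time-invariant X.  Standard errors are
# clustered by firm.
fe = smf.ols("comp ~ perf + size + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)

print(ols.params[["perf", "size"]])
print(fe.params[["perf", "size"]])
```

Reporting both sets of coefficients, and explaining why they differ, is precisely the kind of detail-level argument that separates a defensible robustness exercise from the specification search Steve worries about.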

In sum, there is a lot of empirical work out there that is worthy of suspicion.  There is work that is methodologically unsound, that suffers from poor and unreliable data, or whose authors overclaim.  But there is a lot of really good stuff out there too!  Data is getting cheaper and methods more sophisticated.  Of course, the reduced cost can lead to the types of problems that Steve raises.  But quality problems can and do also occur in doctrinal scholarship applying other, non-quantitative methodologies to interpret statutes, synthesize cases, make historical claims, or construct theoretical models.  The devil in these arguments is typically in the details, no matter what the methodological toolkit.  And rigorous academic discourse is often about identifying and exposing those details to evaluate claims and, hopefully, answer questions that can move the literature forward.

I understand that lawyers are going to be suspicious of foreign toolkits like econometric analysis.  But in my view, the reflexive rejection of empirical work because “it’s hard to control for everything” is about as persuasive as the reflexive rejection of claims that there is some coherent theory of statutory interpretation because “the judge just makes it up anyway, doesn’t he?”  Both might be true on a case-by-case basis, but I think the bar legal scholars face in each case is to take seriously the work of others and describe exactly what the problems are and what implications they have for the results.  Let me be absolutely clear that I do not view Professor Bainbridge’s post as committing that error (it IS a blog post, after all), but it did get me thinking more generally about empirical work and its reception in the broader legal community.