Brian Leiter brings the always fun “what’s wrong with empirical legal studies” meme back to the front page. Professor Leiter’s post is a really good one. He sets up the “problem” with ELS as follows:
There is now too much empirical work being done simply because it looks ‘empirical.’
Professor Bainbridge agrees. And this isn’t Bainbridge’s first rodeo. Here is his earlier call to “reject the quants and their methods.” One suspects that where Professors Bainbridge and Leiter find room to agree — there must be something there. Leiter’s post lays out the issues quite nicely and there is, indeed, quite a bit to agree with. Here’s how he frames the problem of oversupply of low quality empirical legal scholarship:
First, too much of the work is driven by the existence of a data set, rather than an intellectual or analytical point. But the existence of a data set then permits a display of technical skills, which is satisfying to those with a technical fetish. But for everyone else, the question remains: why does this matter? why should one care? and so on.
Second, the analytical- and discursive-skill level of ELS scholars appears to be, on average, low, or at least lower than the typical law & economics or law & philosophy interdisciplinary scholar of yesteryear. This isn’t surprising, given that the genre rewards technical skills related to number crunching and data analysis, as well as research design, rather than smarts on your feet, the ability to draw conceptual distinctions, or construct and deconstruct arguments. But the latter intellectual skills are the ones needed in law, both in thinking about law and in teaching law, not the former. Perhaps this is also why discussion of empirical papers typically follows the same tedious pattern of wondering how one controls for this-or-that variable, with the presenter showing, cleverly, how s/he already controlled for it, or admitting that s/he didn’t, so that this is an issue for future work, etc.
And Leiter’s addendum:
the only claim here is that a disproportionate amount [of mediocre work] is travelling these days under the rubric of ELS, and that it suffers from the flaws noted.
A few thoughts. Well, more like seven.
First, Leiter is spot on in faulting the fetishization of technical skills, at least partially, for the proliferation of empirical legal scholarship that is irrelevant to the law. This is a problem I’ve discussed at length concerning the future of law and economics. I’ve argued that, at least in economics, these trends have rewarded work that is detached from policy relevance and damaged law and economics at the “retail” level. I’m less familiar with the details of these trends in the other fields that contribute to empirical legal studies more generally, e.g. political science, psychology, and sociology, but it’s certainly an issue with empirical law and economics.
Second, Leiter actually understates the problem here. While he correctly points out that the availability of data sets at low cost greases the tracks for empiricists with technical skills to show them off at the expense of legal relevance, the other side of this is that the reduced cost of access to large data sets and analytical tools means that we get more non-technical empirical scholarship too. The problem there is typically quality rather than relevance. Here is Ribstein on the potential causes and consequences of an empirical bubble.
Third, it’s important to point out how easy it is to overstate the tension between work that employs the latest and greatest technical econometric methods and legal relevance. There is plenty of work employing these methods that has great value for legal questions precisely because it allows one to more convincingly identify causal relationships. It need not be the case that highly technical empirical papers are devoid of policy relevance. As I’ve written before:
While I’ve written here before about the potential harms of fetishizing formal mathematical theory, especially with respect to law and economics, I see that as a separate problem altogether. Economists need not apologize for uncovering clean causal relationships in less complex settings. There is no shame in adding to our theoretical and empirical understanding of small subjects with the hopes of generalizing to larger phenomena.
In addition, one of the larger problems in law and economics has been the fetishization of theoretical modeling that abstracts too far from real world problems. One additional benefit of the empirical legal studies movement in the broad sense is to get legal scholars with quantitative skills focused on the real world again. This leads into the cute-o-nomics discussion about whether instrumental variables identification strategies, the popularization of which is often attributed to and/or blamed on Steve Levitt depending on who is doing the talking, are focused too much on clean identification at the expense of policy relevance:
One might think that at least one important consequence of Levitt’s research agenda, in addition to adding to our economic knowledge (which used to be enough, didn’t it?), will be a contribution to making popular again economics that is more connected to explaining real world phenomena of all types with economic intuition, models, and data. If that happens, Levitt isn’t ruining economics. He’ll be saving it. Or at least making it more relevant. And definitely more fun. If Levitt is going to take the brunt of the attack for “clever” research, at a minimum, we ought to be willing to give credit for sending the pendulum back towards the empirically-oriented side of the spectrum by making it “cool” to worry about the real world again.
If the price of getting increased interest in policy-relevant empirical work is that we also get a proliferation of technical models with narrow and less generalizable results along with it, I’m buying.
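For readers outside the econometrics debate, the identification logic at stake here can be sketched in a few lines. This is my own toy simulation, not from any of the papers discussed, and all the variable names and numbers are hypothetical: an unobserved confounder drives both the regressor and the outcome, so ordinary least squares is biased, while an instrument that shifts the regressor but is unrelated to the confounder recovers the true causal effect.

```python
import numpy as np

# Hypothetical simulation: true causal effect of x on y is 2.0, but an
# unobserved confounder moves both x and y, biasing plain OLS. The
# instrument z moves x and is independent of the confounder.
rng = np.random.default_rng(1)
n = 100_000
confounder = rng.normal(size=n)
z = rng.normal(size=n)                           # instrument
x = z + confounder + rng.normal(size=n)          # regressor, confounded
y = 2.0 * x + confounder + rng.normal(size=n)    # outcome; true effect = 2.0

# OLS slope: cov(x, y) / var(x) -- picks up the confounder's influence.
b_ols = np.cov(x, y)[0, 1] / np.var(x)

# IV (Wald / 2SLS with one instrument): cov(z, y) / cov(z, x).
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {b_ols:.2f}")   # biased above 2.0
print(f"IV estimate:  {b_iv:.2f}")    # close to the true 2.0
```

The point of the “clean identification” agenda is exactly this: with a credible instrument, the causal effect survives the confounding that sinks the naive regression.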
Fourth, and on to Leiter’s second claim:
The analytical- and discursive-skill level of ELS scholars appears to be, on average, low, or at least lower than the typical law & economics or law & philosophy interdisciplinary scholar of yesteryear. This isn’t surprising, given that the genre rewards technical skills related to number crunching and data analysis, as well as research design, rather than smarts on your feet, the ability to draw conceptual distinctions, or construct and deconstruct arguments. But the latter intellectual skills are the ones needed in law, both in thinking about law and in teaching law, not the former. Perhaps this is also why discussion of empirical papers typically follows the same tedious pattern of wondering how one controls for this-or-that variable, with the presenter showing, cleverly, how s/he already controlled for it, or admitting that s/he didn’t, so that this is an issue for future work, etc.
I’ve got no position on the empirical claim that ELS scholars, on average, have lower analytical skill levels than earlier generations of interdisciplinary legal scholars. Maybe. I don’t know. I have been feeling a bit sluggish lately. But this argument is easy to overstate, I think. I would guess that the number of empirical legal studies folks doing the sort of highly technical work rewarded by economics departments, for example, is very, very low relative to the number of non-technical folks mixing it up with data sets. This latter group, the non-technical one, probably contributes a dominating fraction of the total empirical legal studies output right now. There are some very good people doing both sets of work, of course. Nobody disputes that. But my sense is that the latter group is much larger. If that is true — and it’s an empirical question — the impact on the average “legal” analytical skills is likely to be small because the non-technical group is trained to do all the things legal scholars do.
Fifth, I’m pretty certain I disagree with the claim that a “disproportionate amount” of “mediocre work” is traveling under the ELS umbrella. Again — it should be quite obvious from the above that I agree there is a lot of mediocre work traveling under all sorts of umbrellas. I think Bainbridge and Leiter both agree. There is too much empirical work done because it looks empirical, too much doctrinal work that chases hot topics because they are hot topics, too many articles with catchy titles, too many footnotes, too much behavioral law and economics, too much about the Supreme Court relative to lower courts, etc. Heck, there might even be too many law professors. But disproportionately mediocre? This is another claim with a testable implication. Perhaps one could compare the fraction of ELS output that is never cited to the corresponding fraction in other fields, though that might not be a decent measure of “mediocre.” And the non-empirical legal studies world is huge. I get enough SSRN emails and see enough papers to suspect that ELS doesn’t have any comparative advantage in mediocrity. On average, I suspect that the relative mediocrity level for those engaging in empirical work is just about right, thank you very much. An alternative explanation is that Professor Leiter is reading a disproportionately high amount of bad empirical legal studies work and not enough mediocre doctrinal, non-empirical work. I’ve got a box of reprints I can get to Chicago to solve this imbalance forthwith. Just say the word :)
Sixth, and related to the last point: an inherent suspicion of empirical work, I think, lies beneath much of the conclusion that the work is of low quality. Professor Bainbridge gets at some of that in this post when he writes that empirical work will “always be suspect — and incomplete in my book.” That seems wrong to me. Suspect and incomplete need not be the same thing. And too often the responses I hear from legal scholars to empirical work that actually has legal relevance and is well done are reflexively dismissive: “but you can’t possibly control for everything, can you?” Sometimes there will be a short quip about “omitted variable bias” or something like that. In the panel data setting, I’ve often sympathized with the econometrician who tries to explain exactly what state fixed effects are controlling for to an audience to whom the answer sounds like he is selling them a used car without an engine. When Leiter complains that the discussion following empirical papers always involves “the same tedious pattern of wondering how one controls for this-or-that variable, with the presenter showing, cleverly, how s/he already controlled for it, or admitting that s/he didn’t, so that this is an issue for future work, etc.” — I suspect there might be some meaningful information in there. It matters a great deal, in fact, whether something is properly controlled for and whether the error structure is appropriate.
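Since the fixed-effects point often gets the used-car reaction, here is a minimal simulated sketch of what those controls actually buy. This is my own hypothetical illustration (the “policy” variable, state structure, and numbers are all invented for the example): an unobserved state-level factor drives both the policy and the outcome, so pooled OLS is biased, while demeaning within states — which is all state fixed effects do — strips out the confounder and recovers the true effect.

```python
import numpy as np

# Hypothetical panel: 50 states, 20 years. An unobserved state-level
# factor raises both the "policy" variable and the outcome y, so pooled
# OLS overstates the policy's effect. The true causal effect is 1.0.
rng = np.random.default_rng(0)
n_states, n_years = 50, 20
n = n_states * n_years
state = np.repeat(np.arange(n_states), n_years)
state_factor = rng.normal(0, 2, n_states)            # unobserved confounder
policy = state_factor[state] + rng.normal(0, 1, n)
y = 1.0 * policy + state_factor[state] + rng.normal(0, 1, n)

# Pooled OLS: regress y on policy alone, omitting the state factor.
X = np.column_stack([np.ones(n), policy])
b_pooled = np.linalg.lstsq(X, y, rcond=None)[0][1]

# State fixed effects via the within transformation: demean each
# variable by its state average, which removes anything constant
# within a state -- including the unobserved confounder.
def demean_by_state(v):
    state_means = np.bincount(state, v) / n_years
    return v - state_means[state]

p_d, y_d = demean_by_state(policy), demean_by_state(y)
b_fe = (p_d @ y_d) / (p_d @ p_d)

print(f"pooled OLS:    {b_pooled:.2f}")   # biased well above 1.0
print(f"fixed effects: {b_fe:.2f}")       # close to the true 1.0
```

That, in code, is the used car with its engine: the fixed effects are not hand-waving, they mechanically remove every time-invariant state-level confounder, observed or not.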
I’m not saying that there is no reason to be suspicious of empirical work. Skepticism about regressions is good, just like skepticism about other interdisciplinary or doctrinal arguments — indeed, sometimes it is easier to hide the assumptions driving the analysis in the latter. But it is hard to escape the conclusion that at least some of the objection to empirical work smacks of folks who don’t do a certain type of work convincing themselves that they aren’t missing anything. I previously noted:
I understand that lawyers are going to be suspicious of foreign toolkits, like econometric analysis. But in my view, the reflexive rejection of empirical work because “it’s hard to control for everything” is about as persuasive as reflexive rejection of claims that there is some coherent theory of statutory interpretation because “the judge just makes it up anyway, doesn’t he?” Both might be true on a case-by-case basis, but I think the bar that legal scholars face in each case is to take seriously the work of others and describe exactly what the problems are and what implications they have for the results.
Seventh, Leiter’s concern about ELS becoming a mutual admiration society is well founded. More generally, I think it is increasingly important for folks doing quantitative empirical work in law schools both to present work at conferences like ALEA and CELS and to “take the show on the road” more broadly. The former provide a much needed opportunity for critical comments on work at early stages to improve quality; the latter an opportunity to see what the rest of the academy is up to and explain why your work matters.
That’s all I have for now. Let’s talk about it again in November after the next CELS.