So what are the problems we face?
1. An explosion in empirical work. More empirical work is, at some level, a good thing: how can we decide what the best policy is without data? But the explosion in output has been matched by an explosion in the variation in quality (partly because user-friendly software allows people with little training to design empirical projects). The good work has never been better, and the bad work has never been worse. It could very well be that average quality has declined. Some of the bad work comes from honest errors, but some of it comes from cynical manipulation.
2. A bad philosophy of science. Social scientists cling to the idea that we are following in the footsteps of Sir Karl Popper, proposing hypotheses and then rejecting them. We are not. We never have. This is clear in any empirical paper: once the analyst calculates the point estimate, he draws implications from it (“My coefficient of 0.4 implies that a 1% increase in rainfall in Mongolia leads to a 0.4% increase in drug arrests in Brooklyn”). This is not falsification, which only allows him to say “I have rejected 0%.” Social science theory cannot produce the type of precise numerical hypothesis that falsification demands. What we are actually doing is estimating an effect, which is an inductive exercise.
3. Limited tools for dealing with induction. Induction requires an overview of an entire empirical literature. Fields like medicine and epidemiology have started to develop rigorous methods for drawing these types of inferences. As far as I can tell, there has been no work of any sort in this direction in the social sciences, including ELS.
This is partly the result of Problem 2: such overviews would be unnecessary were we actually in a falsificationist world, since all it takes is one black swan to refute the hypothesis that all swans are white.
As a result of these three problems, we produce empirical knowledge quite poorly.
To reuse a joke I’ve made before and will likely make again at least a dozen times this month, Newton’s Third Law roughly holds: for every empirical finding there is an opposite (though not necessarily equal) finding.
How about solutions? John has promised to deliver some answers in future posts over at Prawfs. See also Joe Doherty’s examination of the same issues over at ELS Blog from November 2008.
I believe there is some overlap between the problems John discusses and those I’ve pointed out in discussing the future of law and economics (blog posts available here) — and especially empirical law and economics in law schools.
In the case of law and economics, changes in economic science have led to a brand of L&E and L&E scholars less interested in “retailing” their work to legal academics and policy audiences. This, I think, is a variant of the quality control problem that John is discussing in the empirical legal studies field. John’s post makes me think that the fact that ELS carries a more diversified portfolio in terms of its methodological toolkit is more of a bug than a feature. But I’m not sure.
Also, I wonder about the implications of the technical changes John discusses (the reduced cost of producing legal scholarship, especially lower-quality scholarship) for legal scholarship. There is a lot of discussion about the available technological means of separating good empirical work from bad empirical work and of identifying and synthesizing the information in the numerous “good” studies. These are technological problems. One could imagine lots of ways to reduce them with different sorts of quality controls. This is a worthwhile exercise, and I look forward to the rest of John’s posts. But what if the real problem is that within the legal academy there is insufficient demand for the sorts of technological changes that would make the world of legal scholarship more empirically sound (i.e. truthier)?