Professor Bainbridge has a provocative post up taking on empirical legal scholarship generally. While the Professor throws a little bit of a nod toward quantitative work, suggesting it might at least provide some “relevant grist for the analytical mill,” he concludes that “it’s always going to be suspect — and incomplete — in my book.” Here’s a taste:
And then there’s a recent paper I had to read on executive compensation, whose author should remain nameless. The author found a statistically significant result that he didn’t like. So he threw it under the bus by claiming that his regressions were flawed. Accordingly, he turned to panel data analyses that gave him a result he liked. All the while, another paper on the same topic had found the same results as our author’s regressions. Who was it that said statistics don’t lie?
On top of which, of course, there’s the problem that the number crunchers can only tell you something when they’ve got numbers to crunch. Suppose there was a change in the law in 2000. A before and after comparison might be instructive. But companies weren’t required to disclose the relevant information until 2005. You don’t have anything to measure.
I’m tempted to ask how many legal theory debates have been resolved convincingly by a single paper. But instead I’ll try to do something more constructive. At least I hope so. Larry Ribstein chimes in to make the well-taken starting point that theory itself is not useful without data. That is obviously right. And I think if one were to ask whether the legal literature was suffering from too many theories or too little empirical knowledge — I’d opt for the latter. But I’ve got a few different bones to pick.
The first, even holding regressions and sophisticated quantitative analysis aside for the moment — we’ll come back to them — is that the data should constrain the theory. In fields that I am familiar with, there is a great deal of legal scholarship that simply ignores the few stylized facts or empirical regularities established by the empirical literature, yet builds theories and rattles off policy implications. Of course, the theory of legal theorists rejecting an empirical methodology that restricts their ability to generate theory willy-nilly is not one that can be casually rejected. I mean, unconstrained theory does sound like more fun, doesn’t it?
Second, I read Bainbridge’s post as revealing a common tendency in the legal academy to dismiss empirical evidence if it, alone, is not sufficient to resolve some policy debate. Empirical evidence is hard to collect, and a body of empirical knowledge builds over time. My sense is that the gut instinct of law professors is that the role of empirical scholarship is to “prove” assertion X in the way that one might establish the proper interpretation of a contract or statute. Fads in legal scholarship are another example of this high discount rate in the academy. For example, I’ve complained before about what I think is the over-use of behavioral law and economics, the lack of rigor in drawing out policy implications from the evidence and models, and insufficient attention to empirical data. There is a premium in the legal literature on striking while the topic is hot, and on over-claiming (and, of course, on having a catchy paper title); without expert peer review of claims concerning the relevant empirical literature, perhaps that is to be expected. In any event, the point is that to the extent that “law and social science” disciplines like law and economics want to be taken seriously, and claim the advantages of the ancillary discipline, they cannot simultaneously reject the methodological commitments that come with it — even if those commitments come at the price of things moving a bit slower than the law review (or news) cycle.
Third, paragraphs like Bainbridge’s first make me wonder whether such attitudes (not his specifically) about empirical work are informed judgments or just reflexive tendencies to question sophisticated models that are outside our own strike zone. Knowing nothing about the paper that the Professor is talking about in his example, one can in the abstract think of lots of reasons an empiricist might run a plain-vanilla OLS specification as a baseline, report the results, go on to suggest that OLS in this setting has various problems, move on to some more sophisticated panel approach, and compare the more robust results to contrasting ones in the literature. Of course, the sort of specification search that Steve hints at could also be going on. I’ve got no horse in that race. But the more general point is that critiques of econometric models by non-econometricians sometimes betray a thinly veiled anti-empirical bias that is often dressed up in the language of “omitted variable” bias. I can vouch for having given dozens of workshops at law schools where no amount of discussion of panel data techniques, or description of what exactly fixed effects control for, is sufficient to head off the “but did you control for X, Y and Z” question.
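For readers outside the econometrics strike zone, here is a minimal, entirely hypothetical sketch (simulated data, numpy only; the firm/year setup and all numbers are my own invention, not anything from the paper Bainbridge discusses) of why an empiricist might report a plain OLS baseline and then move to a fixed-effects panel specification. When an unobserved, time-invariant firm characteristic drives both the regressor and the outcome, pooled OLS suffers exactly the omitted-variable bias lawyers worry about, while the within (fixed-effects) estimator sweeps that characteristic out by demeaning each firm’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: N firms observed over T years.
N, T = 200, 10
beta_true = 1.0  # the true effect we want to recover

# Unobserved, time-invariant firm "quality" that is correlated
# with the regressor x AND directly affects the outcome y.
alpha = rng.normal(size=N)
x = 0.8 * alpha[:, None] + rng.normal(size=(N, T))
y = beta_true * x + alpha[:, None] + rng.normal(scale=0.5, size=(N, T))

# Pooled OLS ignores the firm effect -> omitted-variable bias.
xf, yf = x.ravel(), y.ravel()
X = np.column_stack([np.ones_like(xf), xf])
beta_pooled = np.linalg.lstsq(X, yf, rcond=None)[0][1]

# Within (fixed-effects) estimator: demeaning by firm sweeps out
# alpha, since alpha is constant within each firm over time.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd.ravel() @ yd.ravel()) / (xd.ravel() @ xd.ravel())

print(f"pooled OLS:    {beta_pooled:.2f}")  # biased away from 1.0
print(f"fixed effects: {beta_fe:.2f}")      # close to the true 1.0
```

The point of the sketch: reporting both columns side by side, and then preferring the panel estimates, is exactly what a careful empiricist would do here, not evidence of throwing inconvenient results “under the bus.” Note what fixed effects do and do not buy you: they control for anything constant within a firm over the sample period, but not for time-varying confounders.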
In sum, there is a lot of empirical work out there that is worthy of suspicion. There is work that is methodologically unsound, suffers from poor and unreliable data, or comes from authors who overclaim. But there is a lot of really good stuff out there too! Data is getting cheaper and methods more sophisticated. Of course, the reduced cost can lead to the types of problems that Steve raises. But quality problems can and do also occur in doctrinal scholarship applying other, non-quantitative methodologies to interpret statutes, synthesize cases, make historical claims, or construct theoretical models. The devil in these arguments is typically in the details, no matter what the methodological toolkit. And rigorous academic discourse is often about identifying and exposing those details to evaluate claims and, hopefully, answer questions that can move the literature forward. I understand that lawyers are going to be suspicious of foreign toolkits, like econometric analysis. But in my view, the reflexive rejection of empirical work because “it’s hard to control for everything” is about as persuasive as the reflexive rejection of claims that there is some coherent theory of statutory interpretation because “the judge just makes it up anyway, doesn’t he?” Both might be true on a case-by-case basis, but I think the bar that legal scholars face in each case is to take seriously the work of others and describe exactly what the problems are and what implications they have for the results. Let me be absolutely clear that I do not view Professor Bainbridge’s post as committing that error — it IS a blog post, after all — but it did get me thinking more generally about empirical work and its reception in the broader legal community.
Matt Bodie wrote:
Matt: I am curious — which verities from the pre-2008 corporate finance literature turned out to be unreliable?
I read Bainbridge’s comment as a sign of complete and utter cluelessness in all matters related to empirical research. Suppose we take for granted his description that an author of some paper reported some results, disliked them, claimed his own methodology to be flawed, and turned to panel techniques to get the preferred result. Well, first, it shows significant academic integrity — after all, the author didn’t have to report the results he didn’t like. Second, when the author claims methodological flaws in a regression, he has to present solid evidence thereof. Most of the time, this argument is either correct or not, without the “it depends” option. You derive asymptotic properties of an estimator, and there isn’t much space for debate about those properties. You can’t just say “ignore the results in Table 3” without producing mathematical explanations for this, which will be objectively evaluated by readers. Third, in some cases, theoretical econometricians truly disagree about properties of estimators. These disagreements are well known. If you present such an estimator, a referee will demand robustness checks. You can’t pick-and-choose the estimator that gives your preferred results without being noticed. Even if the right specification is truly unclear, you aren’t gaining much from it because you’ll have to show the results of all alternative specifications. Finally, I didn’t get that stuff about panel data. The data either required the use of panel techniques or it didn’t. If it did, the author might have presented non-panel results next to panel results specifically to show how the wrong specification can produce spurious results. If panel technique was not warranted by the data, presenting those results will do the author no good.
In short, Bainbridge’s comment reads like a complaint by a person who doesn’t understand what quants are doing and is therefore trying to convince himself (and others) that what quants are doing isn’t really meaningful. Meh.
Oh, I don’t get that from Bainbridge’s post at all. Rather, I get the assertion that all empirical scholarship is inherently limited. Of course, it’s true. The same is true of theory and of doctrinal scholarship. There are costs and benefits to each, of course.
But to your point, I would actually not criticize a paper that says: the data suggest the real world looks like X, but I think those analyses are flawed, so here’s a theory about how to do ___ legal analysis when the world really looks like Y. My point is that there is a lot of scholarship written in a vacuum, as if the empirical scholarship has not provided *any* facts that constrain theory when it, in fact, has. And if the author disagrees and thinks the empiricists are missing something important — they should say so. I think too many papers simply dismiss empirical evidence glibly because … well, “there is a lot of stuff to control for,” without offering real critiques. Those critiques have an important place in the literature. I’m NOT saying that every theory paper has to make them — but some should, and in my view more than actually do in practice.
And just to be clear, I never said the theoretical paper has to canvass the empirical data before moving on to theory. At least, I don’t think I did. And I didn’t mean to imply it. What I said was that the value of papers that ignore the data entirely is, on average, lower than that of papers that do not. There’s a distribution, and there are some high-value papers that ignore the available evidence.
Yes, that helps a lot. But I still think the devil is in the details here. I would imagine that if you talked to the authors who ignored the empirical data, a lot of them would think that the research on X, Y, and Z is incomplete, inconclusive, or irrelevant. Of course, some of it may just be bad scholarship. But I think what Bainbridge is reacting to is the notion that *only* empirical research is good scholarship. He goes a lot farther than that, and I agree with your criticisms. But on the other hand, I don’t think every theoretical paper has to canvass the empirical data before moving on to theory.
Sure, Matt. I’m not sure what it means for facts that were established to become unreliable — perhaps you mean something like: the implications people drew from those facts turned out to be unreliable? Anyway, I think I understand the question. And I drafted a bit inartfully here — so let me clarify. What I was talking about were theoretical papers that simply ignore the empirical evidence altogether. I think it’s perfectly fine if one wants to write a theoretical paper that begins by assuming facts X, Y, and Z even though none of them are currently established. Of course, it would be helpful in that case to at least talk about the facts, or acknowledge that the theory relies on conditions X, Y, and Z that are inconsistent with the empirical knowledge thus far, etc. Further, the comment was especially directed at criticizing papers that ignore empirical evidence and go on to recommend policy changes now — let us not allow facts to get in the way of sufficiently clever theory, and all of that.
So — we can talk about the relative “weight” of theoretical scholarship that examines the world outside the current empirical understanding — but I agree with the principle that this work at least has some value (and potentially quite a bit). If there could be agreement that theoretical work completely unhinged from real-world facts is less likely to be valuable, that would be a good start.
I was wondering if you could follow up on this comment: “In fields that I am familiar with, there is a great deal of legal scholarship that simply ignores the few stylized facts or empirical regularities established by the empirical literature but builds theories and rattles off policy implications.” Because to me, the devil is in the details. Up until September 2008 you would have had a number of verities in the corporate finance literature that were seen as “stylized facts or empirical regularities.” But many of them turned out to be unreliable. I agree that the answer is not to give up trying to collect and use the data. But on the other hand, it seems to swing too far in the other direction to say that scholarship is problematic if it doesn’t hew to the facts that have been established (thus far) by empirical work.
From a tribute by George Shultz to some guy named Milton:
I have one final comment, and this is slightly methodological. Now, we know Milton’s prowess as an intellect, as a theorist, but we also know (and it has been brought out by earlier comments) how insistent he is that if you have a theory, you better test it with facts, and a theory that can’t be tested better be reformulated so it can be tested. And you’ve got to bring the facts to bear. So I know, Milton, you’re tone deaf, but it isn’t going to bother you. I’m going to sing, but you won’t know it. So here’s my song — my methodological tribute.
A fact without a theory is like a ship without a sail; is like a boat without a rudder; is like a kite without a tail. A fact without a theory is as sad as sad can be, but if there’s one thing worse in this universe, it’s a theory – I said a theory, I mean a theory – without a fact.