Thom Lambert is Associate Professor of Law at the University of Missouri
Behavioralism is mesmerizing. Ever since I took Cass Sunstein’s outstanding Elements of the Law course as a 1L at the University of Chicago Law School, I’ve been fascinated by studies purporting to show how humans are systematically irrational.
It is, of course, the “systematic” part that’s interesting. We all know that people do irrational things on occasion. What the behavioralists claim is that humans make the same sorts of irrational decisions over and over — that they are, as the title of Dan Ariely’s popular book puts it, “Predictably Irrational.” Advocates of “behavioral law and economics,” then, contend that policymakers (legislators, regulators, judges) should account for these systematic departures from rational choice when they craft legal rules aimed at maximizing welfare. They should not, these advocates assert, presume that individuals are rational self-interest maximizers, as traditional law and economics scholars would.
While I’ve long been fascinated by behavioral research, and certainly believe that actual facts about how people behave should trump theory, I’ve been reluctant to sign on to the behavioralist law and economics project. Initially, I harbored suspicions about the research purporting to establish all these systematic cognitive quirks. For example, I believe (though I’m not sure) that I was a subject in one of those coffee mug experiments that purports to establish the endowment effect (i.e., the effect by which people ascribe a higher subjective value to an object if they own it than if they don’t and would have to buy it). We did one of those exercises in one of my law school classes, and reports of the studies often refer to experiments involving law students and coffee mugs. If that’s the sort of experimental data underlying this supposed quirk, it’s hardly robust. Indeed, as Charles Plott and Kathryn Zeiler recently showed, the endowment effect studies reach quite different conclusions when the questions are posed differently.
After reading Ariely’s fascinating book, which provides lots of detail on how various studies were conducted, I’m less concerned about data quality. I still suspect, though, that behavioralists are prone to draw hasty conclusions — both positive and normative — from their experimental findings. I once explained this concern in a short response piece titled Two Mistakes Behavioralists Make, where I criticized two symposium participants for jettisoning rational accounts too quickly in attempting to explain survey findings and for being too quick to advocate governmental solutions to various cognitive quirks (with little regard for government’s own institutional maladies).
The thing that most worries me about the behavioralist law and economics project, though, is the problem of conflicting cognitive quirks. What’s a policymaker to do when one heuristic would lead humans to reach a particular non-rational conclusion and another simultaneously operative heuristic would push in the opposite direction? Which heuristic trumps? Without knowing that, we can’t predict what actions people will take.

Consider, for example, chapter one of Cass Sunstein and Richard Thaler’s book, Nudge. That chapter, titled “Biases and Blunders,” aims to sketch out the mental shortcuts we humans use in judging the magnitude of risks. Sunstein and Thaler discuss the well-known “availability heuristic,” pursuant to which people “assess the likelihood of risks by asking how readily examples come to mind.” They explain that “[i]f people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.” (So, for example, we tend to think that homicide is more common than suicide because we hear about homicides more; in reality, suicide is far more common.) The authors also note that people tend to exhibit a salience bias, which causes them to overestimate the risk of highly salient, emotive (“high affect”) events. In addition, Sunstein and Thaler observe, we humans tend to exhibit an overconfidence bias, which leads us to be overly optimistic about our own abilities to avoid bad outcomes (e.g., 90 percent of drivers believe they are above average behind the wheel).
So what would we predict about human risk judgments when all of these heuristics are simultaneously operative? Take, for example, gay men’s estimates of their risk of contracting HIV. On the one hand, gay men are much more likely to know people infected with HIV (availability heuristic) and to have observed the highly salient, agonizing death of a friend or acquaintance suffering from AIDS (salience bias). On the other hand, because the behavior leading to HIV infection is generally voluntary, the overconfidence bias is likely to kick in. Which bias would we expect to trump?
Sunstein and Thaler point to gay men’s perceptions of their own HIV risk as exemplifying the overconfidence bias: “Gay men systematically underestimate the chance that they will contract AIDS, even though they know about AIDS risks in general.” But what happened to the availability heuristic and the salience bias?
Perhaps I’m demanding too much here. Even the rational choice model can’t predict human judgments when individual preferences push in different directions (e.g., will a lawyer who values money, leisure, and the life of the mind give up a lucrative law firm job to become a law professor?). But I do think that if we’re going to complicate the rational choice model with a bunch of quirky “exceptions,” we need some account of how the quirks interact when they conflict. Otherwise, we won’t be able to say that humans are predictably irrational.
Absent some solution to the conflicting quirks problem, the behavioral law and economics project may be susceptible to the sort of critique legal realists once launched against formalists: “Your scientific, supposedly non-ideological means of selecting among policies (i.e., Pick the policy likely to maximize welfare, given humans’ predictable irrationalities) really masks a political judgment.” Karl Llewellyn famously asserted this argument against formalists, noting that many of the supposedly value-free canons of construction that guide judicial interpretation are, in fact, conflicting. Llewellyn pointed to 28 pairs of well-established canons of construction that seem to contradict one another. A judge purporting merely to “interpret” the law, he said, could essentially reach any outcome he preferred simply by picking and choosing among governing canons. Might not the same be said for behavioral law and economics, given the conflicting quirks problem? Does it not seem odd that the quirks the behavioralists observe are almost always taken to justify a paternalistic fix (or, at a minimum, a quasi-paternalistic nudge)?