Below is a graph illustrating the number of citations to selected antitrust publications in federal courts from 2003 – 2011. The full study is available on the Antitrust Source website and updates previous data collected by Jonathan Baker on behalf of the Antitrust Law Journal Editorial Board.
Disclosure: I am a member of the Antitrust Law Journal Editorial Board and the Editorial Advisory Board for Competition Policy International’s Antitrust Chronicle. Special thanks to my research assistant Stephanie Greco for her work on this.
This paper shows close connections between CEOs’ vacation schedules and corporate news disclosures. Vacations are identified by merging corporate jet flight histories with real estate records of CEOs’ properties near leisure destinations. Companies disclose favorable news just before CEOs leave for vacation and delay subsequent announcements until CEOs return, releasing news at an unusually high rate on the CEO’s first day back. When CEOs are away, companies announce less news than usual and stock prices exhibit sharply lower volatility. Volatility increases immediately when CEOs return to work. CEOs spend fewer days out of the office when their ownership is high and when the weather at their vacation homes is cold or rainy.
Around 1,500 Global Competition Review (GCR) readers cast their votes, honoring outstanding individuals in such areas as competition law and economics around the world. GCR is the world’s leading antitrust and competition law journal and news service. The Academic Excellence Award recognizes a highly regarded academic and was presented to Professor Salop at GCR’s 2nd Annual Charity Awards Dinner in Washington, DC. In addition to being a senior consultant to CRA, Dr. Salop is a professor of economics and law at the Georgetown University Law Center in Washington, DC, where he teaches antitrust law and economics and economic reasoning for lawyers.
Forbes interviews my colleague and office neighbor David Schleicher on his new and very interesting paper, City Unplanning. This paper continues Schleicher’s interesting line of research on the law and economics of cities with a creative and powerful analysis of the political economy of zoning in big cities.
Here’s a brief snippet from the start of the interview:
For starters, how about a brief rundown of your story of why housing in major cities is so expensive.
Generations of scholars assumed that, while exclusive suburbs use zoning rules to limit development to keep people out and to increase the average value of housing, big cities don’t do that kind of thing because they are run by “growth machines” or ever more powerful coalitions of developers and the politicians who love them.
But in fact for most of the Twentieth Century, when urban housing prices went up, people started building housing and prices went down. But, at some point, this broke down.
In a number of big cities, new housing starts seem uncorrelated or only weakly correlated with housing prices, and the result of increasing demand while holding supply steady is that prices went up fast. The average cost of a Manhattan apartment is now over $1.4 million and the average monthly rent is over $3,300.
The only explanation is that zoning rules stop supply from increasing in the face of rising demand. (In case you are wondering, this is not a bubble phenomenon—this happened in many cities before the housing bubble, and the behavior of housing markets during and after the crisis is completely consistent with a story about big city housing supply constraints.) And it’s not like real estate developers suddenly became political weaklings. What gives?
The key to my story is that urban legislatures don’t have competitive local parties—we don’t see big city legislatures divided between Republicans and Democrats, each trying to create a localized brand for competence on local issues. Instead, most local legislatures are either non-partisan or dominated by one party.
As a result, there is no one with the power and incentives to strike deals between legislators in order to promote things that are good for people across the city. And there is no one to decide the order in which issues are decided, which matters when legislative preferences “cycle,” that is, when there are majorities that prefer a to b, b to c, and c to a.
The result of the lack of competitive local parties is that procedural rules matter a lot—they set the voting order, which can determine the outcome.
Part II of the interview is available here. The abstract is here:
Generations of scholarship on the political economy of zoning have tried to explain a world in which tony suburbs run by effective homeowner lobbies use zoning to keep out development, but big cities allow relatively untrammeled growth because of the political influence of developers. Further, this literature has assumed that, while zoning restrictions can cause “micro-misallocations” inside a metropolitan region, they cannot increase housing prices throughout a region because some of the many local governments in a region will allow development. But these theories have been overtaken by events. Over the past few decades, land use restrictions have driven up housing prices in the nation’s richest and most productive regions, resulting in massive changes in where in America people live and reducing the growth rate of the economy. Further, as demand to live in them has increased, many of the nation’s biggest cities have become responsible for substantial limits on development. Although developers are, in fact, among the most important players in city politics, we have not seen enough growth in the housing supply in many cities to keep prices from skyrocketing.
This paper seeks to explain these changes with a story about big city land use that places the legal regime governing land use decisions at its center. Using the tools of positive political theory, I argue that, in the absence of strong local political parties, land use law sets the voting order in local legislatures, determining policy from potentially cycling preferences. Specifically, these laws create a peculiar procedure, a form of seriatim decision-making in which the intense preferences of local residents opposed to re-zonings are privileged against more weakly-held citywide preferences for an increased housing supply. Without a party leadership to organize deals and whip votes, legislatures cannot easily make deals for generally-beneficial legislation stick. Legislators, who may prefer building everywhere to not building anywhere, but have stronger preferences for stopping construction in their districts, “defect” as a matter of course and building is restricted everywhere. Further, the seriatim nature of local land use procedure results in a large number of “downzonings,” or reductions in the ability of landowners to build “as of right,” as big developers do not have an incentive to fight these changes. The cost of moving amendments through the land use process means that small developers cannot overcome the burdens imposed by downzonings, thus limiting incremental growth in the housing stock.
Finally, the paper argues that, as land use procedure is the problem, procedural reform may provide a solution. Land use and international trade have similarly situated interest groups. Trade policy was radically changed, from a highly protectionist regime to a largely free trade one, by the introduction of procedural reforms like the Reciprocal Trade Agreements Act, adjustment assistance, and “safeguards” measures. The paper proposes changes to land use procedures that mimic these reforms. These changes would structure voting order and deal-making in local legislatures in a way that would create support for increases in the urban housing supply.
An interesting post on the University of Pennsylvania Reg Blog from Michael Abramowicz, Ian Ayres, and Yair Listokin (AAY) on “Randomizing Regulation,” based upon their piece in the U Penn L. Rev.
If legislators disagree about the efficacy of a proposed policy, why not resolve the disagreement with a bet? One approach would be to impose one policy approach randomly on some members of the population, but not on others, to determine whether the policy meets its goals. This solution would overcome the measurement problems of conventional regression analysis and would provide a useful way to compare regulations and promote bipartisan agreement. Legislators might agree that once such a test is complete, the winning approach would apply to everyone.
For example, regulators could test the Sarbanes-Oxley Act’s most controversial provisions, such as those requiring public companies to institute internal controls and then to have their CEOs and CFOs certify their financial statements, by randomly repealing one or more of those provisions for some corporations for some period of time. Randomization would enable analysts to determine which regulatory regime is optimal by assessing which test-group of corporations has the highest level of success, whether measured by stock price, investor confidence in financial reporting, lack of fraud, or other yardsticks.
Conventional statistical and econometric analytical techniques are often used to measure the efficacy of statutes and regulations, but they face problems that randomized trials would not. Researchers may purposefully or mistakenly omit variables from their regression analyses, leading to incorrect results. Publishers are more likely to feature work that provides statistically significant results, even if those results are not correct, a phenomenon known as publication bias.
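The omitted-variable problem AAY describe is easy to see in a small simulation. The sketch below is purely illustrative (the variable names, effect sizes, and sample size are my assumptions, not the authors’): an unobserved confounder drives both who adopts a hypothetical policy and the outcome, so a naive observational comparison is badly biased, while randomized assignment recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1.0  # the policy's real causal effect (assumed)

# Observational setting: an unobserved confounder drives both
# take-up of the policy and the outcome itself.
confounder = rng.normal(size=n)
treated = (confounder + rng.normal(size=n)) > 0
outcome = true_effect * treated + 2.0 * confounder + rng.normal(size=n)

# Naive treated-vs-untreated comparison (equivalent to a regression
# that omits the confounder) is biased well above the true effect.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Randomized assignment severs the link to the confounder.
assigned = rng.random(n) < 0.5
outcome_rct = true_effect * assigned + 2.0 * confounder + rng.normal(size=n)
rct = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"true effect:          {true_effect:.2f}")
print(f"naive observational:  {naive:.2f}")  # far too large
print(f"randomized estimate:  {rct:.2f}")    # close to the truth
```

The point is not the particular numbers but the structure: no amount of care in the naive comparison fixes it without observing the confounder, whereas randomization does not need to observe it at all.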
No doubt many economists and empiricists are nodding their heads in agreement and drooling at the opportunity to more accurately identify and measure the effects of regulation. Randomization would allow application of techniques far superior to what is typically used. AAY discuss some of the common critiques of randomization in the blog post, and at greater length in the paper. The longer version is worth reading, but here is the short version from the blog post:
Ethical concerns are important, but may not present a significant barrier to using randomized tests. While legal randomized tests would lack the informed consent provided in medical experiments, the government regularly imposes regulations on the public – within constitutional and other legal bounds. Also, randomization sometimes makes the imposition more equal than regulation imposed using predetermined criteria. We tend to think it is worse to impose rules on people because the selected people are unpopular rather than simply because they were selected randomly.
How should randomized trials work? The experiments should be large enough to produce meaningful results. The test groups, meanwhile, should be the smallest possible without changing the results outside those test groups. For example, driving speed limits cannot be randomized at the individual level because such a test group size would significantly increase the risk of accidents. However, the test group could be at the county level.
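AAY do not give a sizing formula, but the standard two-sample power calculation makes “large enough to produce meaningful results” concrete. The sketch below uses only the Python standard library; the 5% significance level and 80% power are conventional defaults I am assuming, not figures from the paper.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Approximate smallest per-group sample size needed to detect a
    difference in means of `effect` (outcome standard deviation `sd`)
    with a two-sided test, using the normal approximation."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / effect) ** 2)

# Detecting a half-standard-deviation effect takes about 63 units per
# arm; halving the detectable effect roughly quadruples that.
print(n_per_arm(0.5, 1.0))   # 63
print(n_per_arm(0.25, 1.0))  # 252
```

This is why the unit of randomization matters so much in their county-level speed limit example: counting counties rather than drivers shrinks the effective sample, so subtle policy effects may require many jurisdictions or long observation windows.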
Experiments should also be of sufficiently long durations to prevent test subjects from changing their behavior temporarily for the duration of the experiment. For example, if different income tax levels are imposed on different people to see if imposing a higher income tax reduces work output, an experiment of short duration would be more likely to be biased. Workers could wait out a temporary increase in income tax level by temporarily working less, and plan to work more once their income tax level decreases.
There is no problem, under current standards of judicial review, with administrative agencies testing out different regulations on their own. Agencies could put their proposed experimental regulations through the regular notice and comment process. After running the experiment, the agencies could provide a randomization impact statement explaining why the agency decided to test regulations through that process, describing the experiment, and providing its results. Because randomization provides for more objective analysis of policy results, courts should be more deferential in conducting hard look review to agencies that have selected policies through this approach.
One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published today in Science, also suggest that journal editors strategically target junior faculty, who in turn were more willing to acquiesce.
I think reference bloat is a problem, particularly in management journals (not so much in economics journals). Too many papers include tedious lists of references supporting even trivial or obvious points. It’s a bit like blog entries that ritually link every technical term or proper noun to its corresponding wikipedia entry. “Firms seek to position themselves and acquire resources to achieve competitive advantage (Porter, 1980; Wernerfelt, 1984; Barney, 1986).” Unless the reference is non-obvious, narrowly linked to a specific argument, etc., why include it? Readers can do their Google Scholar searches if needed.
In management this strikes me as a cultural issue, not necessarily the result of editors or reviewers wanting to build up their own citation counts. But I’d be curious to hear about readers’ experiences, either as authors or (confession time!) editors or reviewers.
With all due respect to management journals for requiring citations for authority that water runs downhill, demand curves slope downward and so forth, I’ve got my money on the law reviews.
I’ve posted a new project in progress (co-authored with Angela Diveley) to SSRN. In “Do Expert Agencies Outperform Generalist Judges?”, we attempt to examine the relative performance of FTC Commissioners and generalist Article III federal court judges in antitrust cases and find some evidence undermining the oft-invoked assumption that Commission expertise leads to superior performance in adjudicatory decision-making. Here is the abstract:
In the context of U.S. antitrust law, many commentators have recently called for an expansion of the Federal Trade Commission’s adjudicatory decision-making authority pursuant to Section 5 of the FTC Act, increased rulemaking, and carving out exceptions for the agency from increased burdens of production facing private plaintiffs. These claims are often expressly grounded in the assertion that expert agencies generate higher quality decisions than federal district court judges. We call this assertion the expertise hypothesis and attempt to test it. The relevant question is whether the expert inputs available to the Commission translate to higher quality outputs and better performance than generalist federal district court judges produce in their role as adjudicatory decision-makers. While many appear to assume agencies have courts beat on this margin, to our knowledge, this oft-cited reason to increase the discretion of agencies and the deference afforded them by reviewing courts is devoid of empirical support. Contrary to the expertise hypothesis, we find evidence suggesting the Commission does not perform as well as generalist judges in its adjudicatory antitrust decision-making role. Furthermore, while the available evidence is more limited, there is no clear evidence the Commission adds significant incremental value to the ALJ decisions it reviews. In light of these findings, we conclude there is little empirical basis for the various proposals to expand agency authority and deference to agency decisions. More generally, our results highlight the need for research on the relationship between institutional design and agency expertise in the antitrust context.
We are in the process of expanding the analysis and, as always, comments are welcome here or at my email address on the sidebar.
I am pleased to pass along the following information regarding Olin-Searle-Smith Fellowships for the upcoming 2012-13 academic year. The application deadline is March 15, 2012.
2012 – 2013
The Olin-Searle-Smith Fellows in Law program will offer top young legal thinkers the opportunity to spend a year working full time on writing and developing their scholarship with the goal of entering the legal academy. Up to three fellowships will be offered for the 2012-2013 academic year.
A distinguished group of academics will select the Fellows. Criteria include:
Dedication to teaching and scholarship
A J.D. and extremely strong academic qualifications (such as significant clerkship or law review experience)
Commitment to the rule of law and intellectual diversity in legal academia
The promise of a distinguished career as a legal scholar and teacher
Stipends will include $50,000 plus benefits. While details will be worked out with the specific host school for the Fellow, in general the Fellow will be provided with an office and will be included in the life of the school. Fellows are not expected to hold other employment during the term of their fellowships.
All those who feel they fit the criteria are encouraged to apply. Applicants should submit the following:
A resume and law school transcript
Academic writing sample(s) with an approximately 50-page limit on the total number of pages submitted (i.e. two 25-page pieces are fine, two 50-page pieces are not)
A brief discussion of their areas of intellectual interest (approximately 2 pages)
A statement of their commitment to teaching law
At least two and generally no more than three letters of support. These should come from people who can speak to your academic potential and should generally include at least two letters from law professors. If you are doing interdisciplinary work a letter from someone who can speak to your work in that area is also helpful. You may also include additional references with phone numbers.
Applications must be received no later than March 15, 2012.
Applicants will be notified in early to mid-May 2012.
Please submit applications to:
Olin-Searle-Smith Fellows in Law Program
ATTN: Tyler Lowe
c/o The Federalist Society
1015 18th Street, N.W., Suite 425
Washington, D.C. 20036
Judge Ginsburg and I are working on a project for an upcoming festschrift in honor of Bill Kovacic. The project involves the role of settlements in the pursuit of the goals of antitrust. In particular, we are looking for examples of antitrust settlements between competition agencies and private parties — in the U.S. or internationally — involving conditions either: (1) clearly antithetical to consumer welfare, or (2) that arguably disserve consumer welfare. In the former category, examples might include conditions requiring firms to make employment commitments. The second category might include conditions placing the agency in an ongoing regulatory role or restricting the firm’s ability to engage in consumer-welfare increasing price or non-price competition.
I turn to our learned TOTM readership for help. Please feel free to leave examples in the comments here — or email me. Cites and links appreciated.
In its recent report entitled “The Evolving IP Marketplace,” the Federal Trade Commission (FTC) advances a far-reaching regulatory approach (Proposal) whose likely effect would be to distort the operation of the intellectual property (IP) marketplace in ways that will hamper the innovation and commercialization of new technologies. The gist of the FTC Proposal is to rely on highly non-standard and misguided definitions of economic terms of art such as “ex ante” and “hold-up,” while urging new inefficient rules for calculating damages for patent infringement. Stripped of the technicalities, the FTC Proposal would so reduce the costs of infringement by downstream users that the rate of infringement would unduly increase, as potential infringers find it in their interest to abandon the voluntary market in favor of a more attractive system of judicial pricing. As the number of nonmarket transactions increases, the courts will play an ever larger role in deciding the terms on which the patents of one party may be used by another party. The adverse effects of this new trend will do more than reduce the incentives for innovation; they will upset the current set of well-functioning private coordination activities in the IP marketplace that are needed to accomplish the commercialization of new technologies. Such a trend would seriously undermine capital formation, job growth, competition, and the consumer welfare the FTC seeks to promote.
Focusing in particular on SSOs, the trio homes in on the potential incentive problem created by the FTC’s proposal:
The central problem with the FTC’s approach is that it would interfere seriously with the helpful incentives all parties in the IP marketplace presently have to contract with each other. The FTC’s approach ignores the powerful incentives that it creates for putative licensees to spurn the voluntary market in order to obtain a strategic advantage over the licensor. In any voluntary market, the low rates that go to initial licensees reflect the uncertainty of the value of the patented technology at the time the license is issued. Once that technology has proven its worth, there is no sound reason to allow any potential licensee who instead held out from the originally offered deal to get bargain rates down the road. Allowing such an option would make the holdout better off than the contracting party. Such holdouts would not need to take licenses for technologies with low value, while resting assured they would still get technologies with high value at below market rates. The FTC seems to overlook that a well-functioning patent damage system should do more than merely calibrate damages after the fact. An efficient approach to damages is one that also reduces the number of infringements overall by making sure that the infringer cannot improve his economic position by his own wrong.
The FTC Proposal rests on the misguided conviction that the law should not allow a licensor to “demand and obtain royalty payments based on the infringer’s switching costs” once the manufacturer has “sunk costs into using the technology;” and it labels any such payments as the result of “hold-up.”
As Epstein, et al. discuss, current private ordering (reciprocal dealing, repeat play, RAND terms, etc.) works perfectly well to address real hold-up problems, and the FTC seems to be both defining the problem oddly and, thus, creating a problem that doesn’t really exist.
Our book, Competition Policy and Patent Law Under Uncertainty: Regulating Innovation will be published by Cambridge University Press in July. The book’s page on the CUP website is here.
I just looked at the site to check on the publication date and I was delighted to see the advance reviews of the book. They are pretty incredible, and we’re honored to have such impressive scholars, among the very top in our field and among our most significant influences, saying such nice things about the book:
After a century of exponential growth in innovation, we have reached an era of serious doubts about the sustainability of the trend. Manne and Wright have put together a first-rate collection of essays addressing two of the important policy levers – competition law and patent law – that society can pull to stimulate or retard technological progress. Anyone interested in the future of innovation should read it.
Daniel A. Crane, University of Michigan
Here, in one volume, is a collection of papers by outstanding scholars who offer readers insightful new discussions of a wide variety of patent policy problems and puzzles. If you seek fresh, bright thoughts on these matters, this is your source.
Harold Demsetz, University of California, Los Angeles
This volume is an essential compendium of the best current thinking on a range of intersecting subjects – antitrust and patent law, dynamic versus static competition analysis, incentives for innovation, and the importance of humility in the formulation of policies concerning these subjects, about which all but first principles are uncertain and disputed. The essays originate in two conferences organized by the editors, who attracted the leading scholars in their respective fields to make contributions; the result is that rara avis, a contributed volume more valuable even than the sum of its considerable parts.
Douglas H. Ginsburg, Judge, US Court of Appeals, Washington, DC
Competition Policy and Patent Law under Uncertainty is a splendid collection of essays edited by two top scholars of competition policy and intellectual property. The contributions come from many of the world’s leading experts in patent law, competition policy, and industrial economics. This anthology takes on a broad range of topics in a comprehensive and even-handed way, including the political economy of patents, the patent process, and patent law as a system of property rights. It also includes excellent essays on post-issuance patent practices, the types of practices that might be deemed anticompetitive, the appropriate role of antitrust law, and even network effects and some legal history. This volume is a must-read for every serious scholar of patent and antitrust law. I cannot think of another book that offers this broad and rich a view of its subject.
Herbert Hovenkamp, University of Iowa
With these contributors:
Robert Cooter, Richard A. Epstein, Stan J. Liebowitz, Stephen E. Margolis, Daniel F. Spulber, Marco Iansiti, Greg Richards, David Teece, Joshua D. Wright, Keith N. Hylton, Haizhen Lee, Vincenzo Denicolò, Luigi Alberto Franzoni, Mark Lemley, Douglas G. Lichtman, Michael Meurer, Adam Mossoff, Henry Smith, F. Scott Kieff, Anne Layne-Farrar, Gerard Llobet, Jorge Padilla, Damien Geradin and Bruce H. Kobayashi
I would have said the book was self-recommending. But I’ll take these recommendations any day.
Russell Korobkin (UCLA) provocatively declares the ultimate victory of behavioral law and economics over neoclassical economics:
I am declaring victory in the battle for the methodological soul of the law and economics discipline. There is no need to continue to pursue the debate between behavioralists (that is, proponents of incorporating insights previously limited to the discipline of psychology into the economic analysis of legal rules and institutions) and the defenders of the traditional faith in individual optimization as a core analytical assumption of legal analysis.
Behavioral law and economics wins. And it’s not close. Korobkin continues:
[T]he battle to separate the economic analysis of legal rules and institutions from the straightjacket of strict rational choice assumptions has been won, at least by and large. The fundamental methodological assumption of rational-choice economics, that individual behavior necessarily maximizes subjective expected utility, given constraints, has been largely discredited as an unyielding postulate for the analysis of legal policy. Yes, such an assumption, even if inaccurate, simplifies the world, but it does so in an unhelpful way, much in the way that it is unhelpful for a drunk who has lost his car keys in the bushes to search under the streetlamp because that is where the light is.
The paper is remarkable on many levels, few of them positive. I understand Professor Korobkin is trying to be provocative; in this he succeeds. I — for one — am provoked. But one problem with claims designed to provoke is that they may sacrifice other virtues in exchange for achieving the intended effect. In this case, humility and accuracy are the first — but not the last — to go. Indeed, Korobkin begins by acknowledging (and marginalizing) those who would deny victory to the behaviorists while magnanimously offering terms of surrender:
Not everyone has been won over, of course, but enough have to justify granting amnesty to the captured and politely ignoring the unreconstructed.
Unreconstructed. I guess I’ll have to take that one. Given the skepticism I’ve expressed (with Douglas Ginsburg) concerning behavioral law and economics, and in particular, the abuse of the behavioral economics literature by legal scholars, it appears capture is unlikely. Indeed, Judge Ginsburg and I are publishing a critique of the behavioral law and economics movement — Behavioral Law and Economics: Its Origins, Fatal Flaws, and Implications for Liberty — in the Northwestern Law Review in January 2012. A fuller development of the case for skepticism about behavioral law and economics can wait for the article; it suffices for now to lay out a few of the most incredible aspects of Korobkin’s claims.
Perhaps the most incendiary aspect of Korobkin’s paper is not a statement, but an omission. Korobkin claims that rational choice economics has been “largely discredited as an unyielding postulate for the analysis of legal policy” — and then provides no citation for this proposition. None. Not “scant support,” not “conflicting evidence” — Korobkin dismisses rational choice economics quite literally by fiat. We are left to infer from the fact that legal scholars have frequently cited two important articles in the behavioral law and economics canon (the 1998 article A Behavioral Approach to Law and Economics by Christine Jolls, Cass Sunstein and Richard Thaler and Law and Behavioral Science: Removing the Rationality Assumption from Law and Economics by Korobkin and Tom Ulen) that the behavioral approach has not only claimed victory in the marketplace for ideas but so decimated rational choice economics as to leave it discredited and “unhelpful.” One shudders to consider the legion of thinkers chagrinned by Korobkin’s conclusive declaration.
Oh, wait. The citations prove that behavioral law and economics is popular among legal scholars — and that’s about it. I’ve no doubt that much is true. If Korobkin’s claim were merely that behavioral law and economics has become very popular, I suppose that would be a boring paper, but the evidence would at least support the claim. But the question is about relative quality of insight and analysis, not popularity. Korobkin acknowledges as much, observing in passing that “Citation counts do not necessarily reflect academic quality, of course, but they do provide insight into what trends are popular within the legal academy.” Undaunted, Korobkin moves seamlessly from popularity to the comparative claim that behavioral law and economics has “won” the battle over rational choice economics. There is no attempt to engage intellectually on the merits concerning relative quality; truth, much less empirical validation, is not a mere matter of a headcount.
Even ceding the validity of citations as a metric to prove Korobkin’s underlying claim — the comparative predictive power of two rival economic assumptions — what is the relative fraction of citations using rational choice economics to provide insights into legal institutions? How many cites has Posner’s Economic Analysis of Law received? Where is the forthcoming comparison of articles in the Journal of Law and Economics, Journal of Legal Studies, Journal of Political Economy, Journal of Law, Economics, and Organization, American Economic Review, etc.? One might find all sorts of interesting things by analyzing what is going on in the law and economics literature. No doubt one would find that the behaviorists have made significant gains; but anyone expecting to find rational choice economics discredited is sure to be disappointed by the facts.
Second, notice that the declaration of victory comes upon the foundation of citations to papers written in 1998 and 2000. The debate over the law and economics of minimum resale price maintenance took nearly a century to settle in antitrust law, but behavioral law and economics has displaced and discredited all of rational choice economics in just over a decade? The behavioral economics literature itself is, in scientific terms, very young. The literature understandably continues to develop. The theoretical and empirical project of identifying the conditions under which various biases are observed (and when they are not) is still underway and at a relatively early point in its development. The over-reaching in Korobkin’s claim is magnified when one considers the relevant time horizon: impatience combined with wishful thinking is not a virtue in scientific discourse.
Third, it is fascinating that it is consistently the lawyers, and mostly law professors, rather than the behavioral economists, who wish to “discredit” rational choice economics. Similarly, rational choice economists generally do not speak in such broad terms about discrediting behavioral economics as a whole. Indeed, behavioral economists have observed that “it’s becoming clear that behavioral economics is being asked to solve problems it wasn’t meant to address. Indeed, it seems in some cases that behavioral economics is being used as a political expedient, allowing policymakers to avoid painful but more effective solutions rooted in traditional economics.” There are, of course, significant debates among theorists concerning the welfare implications of models, and among empiricists interpreting experiments and field evidence. But it is the law professors without economic training who want to discredit a branch of economics. It is important to distinguish here between behavioral economics and behavioral law and economics, and between rational choice economics and its application to law. No doubt there are applications of rational choice economics to law that overreach and warrant criticism; equally, there are abuses of behavioral economics in the behavioral law and economics literature. It is a very productive exercise, and one in which law professors might have a comparative advantage, to identify and criticize these examples of overreaching in application to law. But with all due respect to Professor Korobkin, if rational choice economics is going to be discredited — a prospect I doubt given its success in so many areas of the law — some economists are going to have to be involved.
Fourth, in the midst of declaring victory over rational choice economics, Korobkin doesn’t even bother to define rational choice economics correctly. Korobkin writes:
To the extent that legal scholars wish to premise their conclusions on the assumption that the relevant actors are perfect optimizers of their material self-interest, they bear the burden of persuasion that this assumption is realistic in the particular context that interests them.
Elsewhere, Korobkin writes:
My central thesis, which runs through the three parts of the article to follow, is that now that law and economics has discarded the “revealed preferences” assumption of neoclassical economics – that individual behavior necessarily maximizes subjective expected utility . . .
This isn’t the rational choice argument; it barely suffices as a caricature of the rational choice assumption underlying conventional microeconomic analysis. Korobkin falls victim to the all-too-common misunderstanding that the rational choice assumption is a descriptive assumption about each individual’s behavior. Not only is that obviously incorrect (and I suspect Korobkin knows it); anyone with even a passing familiarity with the rational choice literature realizes that a host of economists — Friedman, Becker, Stigler, and Alchian, to name a few — have long been interested in, understood, and incorporated irrational economic behavior into microeconomics. The rational choice assumption has never been about describing the individual decision-making processes of economic agents. Perhaps a model with a different assumption, e.g. that all individuals exhibit loss aversion or optimism bias (or half of them, or a quarter, or whatever), will offer greater predictive power. Perhaps not. Economists all agree that predictive power is the criterion for model selection. That is the right debate to have (see, e.g., here), not whether law professors find uses for the behavioral approach to argue for various forms of paternalistic intervention — and, for the record, it is still the case that this literature is used nearly uniformly for such purposes by law professors. Korobkin’s method of declaring methodological victory on behalf of behavioral law and economics while failing to accurately describe rational choice economics is a little bit like challenging your rival to “take it outside,” and then remaining inside and gloating about your victory while he waits for the fight outside.
Korobkin defends his provocative declaration of victory with the argument that it allows him to “avoid an extended discussion” of a number of claims he has already deemed appropriate to dismiss (mostly through conventional strawman approaches) in favor of focusing on new and exciting challenges for the behaviorists. I offer two observations on the so-called benefits of declaring victory while the battle is still being waged. The first is that avoiding evidence-based debate is a bug, not a feature, from the perspective of scientific method. The second is a much more practical exhortation against premature celebration: you can lose while you admire the scoreboard. Anyone who has ever played sports knows it is best to “play the whistle.”
One final observation. I recall from Professor Korobkin’s website bio that he is a Stanford guy. You’d think he’d be a little bit more sensitive to the risk of losing the game while the band prematurely celebrates victory.