Along with my colleagues Todd Zywicki and Ilya Somin, I am a co-editor of the Supreme Court Economic Review, a peer-reviewed publication that is one of the country’s top-rated law and economics journals. SCER, together with its publisher, the University of Chicago Press, has put together a new submissions website. If you have a relevant submission, please submit it at the website for our review.
I’ve posted to SSRN an article written for the Antitrust Law Journal symposium on the Neo-Chicago School of Antitrust. The article is entitled “Abandoning Chicago’s Antitrust Obsession: The Case for Evidence-Based Antitrust,” and focuses upon what I believe to be a central obstacle to the continued evolution of sensible antitrust rules in the courts and agencies: the dramatic proliferation of economic theories that could be used to explain antitrust-relevant business conduct. That proliferation has given rise to a need for a commitment to develop sensible criteria for selecting among these theories, a commitment not present in modern antitrust institutions. I refer to this as the “model selection problem,” describe how reliance upon shorthand labels and descriptions of the various “Chicago Schools” has distracted from the development of solutions to this problem, and raise a number of promising approaches to embedding a more serious commitment to empirical testing within modern antitrust.
Here is the abstract.
The antitrust community retains something of an inconsistent attitude towards evidence-based antitrust. Commentators, judges, and scholars remain supportive of evidence-based antitrust, even vocally so; nevertheless, antitrust scholarship and policy discourse continue to press forward advocating the use of one theory over another as applied in a specific case, or one school over another with respect to the class of models that should inform the structure of antitrust’s rules and presumptions, without tethering those questions to an empirical benchmark. This is a fundamental challenge facing modern antitrust institutions, one that I call the “model selection problem.” The three goals of this article are to describe the model selection problem, to demonstrate that the intense focus upon so-called schools within the antitrust community has exacerbated the problem, and to offer a modest proposal to help solve the model selection problem. This proposal has two major components: abandonment of terms like “Chicago School,” “Neo-Chicago School,” and “Post-Chicago School,” and replacement of those terms with a commitment to testing economic theories against economic knowledge and empirical data, and to supporting those theories with the best predictive power. I call this approach “evidence-based antitrust.” I conclude by discussing several promising approaches to embedding an appreciation for empirical testing more deeply within antitrust institutions.
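The abstract’s notion of choosing among competing theories by predictive power can be made concrete with a toy sketch. This is purely illustrative and not anything from the article: the data, the two candidate models, and the held-out comparison are all hypothetical stand-ins for "competing economic theories tested against evidence."

```python
# Toy illustration of model selection by predictive power: prefer the
# candidate model with the smaller prediction error on held-out data,
# rather than choosing by label or school. All data here are made up.

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # observed (x, y) pairs
holdout = [(5, 10.1), (6, 11.8)]                    # data reserved for testing

def fit_mean(data):
    """Model A: predict a constant, the training mean of y."""
    mean_y = sum(y for _, y in data) / len(data)
    return lambda x: mean_y

def fit_linear(data):
    """Model B: predict y = a + b*x, fit by ordinary least squares."""
    n = len(data)
    sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def mse(model, data):
    """Mean squared prediction error on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

models = {"constant": fit_mean(train), "linear": fit_linear(train)}
best = min(models, key=lambda name: mse(models[name], holdout))
print(best)  # prints "linear": it predicts the held-out data far better
```

The point is the selection criterion, not the particular models: the benchmark is out-of-sample accuracy, which is the kind of empirical tether the abstract argues antitrust institutions lack.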
I would refer interested readers to the work of my colleagues Tim Muris and Bruce Kobayashi (also prepared for the Antitrust L.J. symposium), “Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century,” which focuses upon similar themes.
Below is a graph illustrating the number of citations to selected antitrust publications in federal courts from 2003 to 2011. The full study is available on the Antitrust Source website and updates previous data collected by Jonathan Baker on behalf of the Antitrust Law Journal Editorial Board.
Disclosure: I am a member of the Antitrust Law Journal Editorial Board and the Editorial Advisory Board for Competition Policy International’s Antitrust Chronicle. Special thanks to my research assistant Stephanie Greco for her work on this.
Forbes interviews my colleague and office neighbor David Schleicher on his new and very interesting paper, City Unplanning. The paper continues Schleicher’s line of research on the law and economics of cities with a creative and powerful analysis of the political economy of zoning in big cities.
Here’s a brief snippet from the start of the interview:
For starters, how about a brief rundown of your story of why housing in major cities is so expensive.
Generations of scholars assumed that, while exclusive suburbs use zoning rules to limit development to keep people out and to increase the average value of housing, big cities don’t do that kind of thing because they are run by “growth machines” or ever more powerful coalitions of developers and the politicians who love them.
But in fact for most of the Twentieth Century, when urban housing prices went up, people started building housing and prices went down. But, at some point, this broke down.
In a number of big cities, new housing starts seem uncorrelated or only weakly correlated with housing prices, and the result of increasing demand while holding supply steady is that prices went up fast. The average cost of a Manhattan apartment is now over $1.4 million and the average monthly rent is over $3,300.
The only explanation is that zoning rules stop supply from increasing in the face of rising demand. (In case you are wondering, this is not a bubble phenomenon—this happened in many cities before the housing bubble, and the behavior of housing markets during and after the crisis is completely consistent with a story about big city housing supply constraints.) And it’s not like real estate developers suddenly became political weaklings. What gives?
The key to my story is that urban legislatures don’t have competitive local parties—we don’t see big city legislatures divided between Republicans and Democrats, each trying to create a localized brand for competence on local issues. Instead, most local legislatures are either non-partisan or dominated by one party.
As a result, there is no one with the power and incentives to strike deals between legislators in order to promote things that are good for people across the city. And there is no one to decide the order in which issues are decided, which matters when legislative preferences “cycle,” that is, when there are majorities that prefer a to b, b to c, and c to a.
The result of the lack of competitive local parties is that procedural rules matter a lot—they set the voting order, which can determine the outcome.
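Schleicher’s point about cycling preferences and voting order can be illustrated with a short sketch. The three-legislator preference profile below is the standard Condorcet-cycle example implied by his a-to-b, b-to-c, c-to-a description; the agenda-setting procedure is my own illustrative construction, not anything from the paper.

```python
# Three legislators with cyclic majority preferences: a majority prefers
# a to b, a majority prefers b to c, and a majority prefers c to a.
# Each ballot ranks the options from most to least preferred.
ballots = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def majority_winner(x, y):
    """Return whichever of x and y a majority of ballots ranks higher."""
    votes_x = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if votes_x > len(ballots) / 2 else y

def run_agenda(agenda):
    """Seriatim pairwise voting: the current winner faces each next option."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = majority_winner(winner, challenger)
    return winner

# Identical preferences, three different voting orders, three outcomes:
for agenda in [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]:
    print(agenda, "->", run_agenda(agenda))
```

Whoever sets the agenda effectively picks the outcome, which is why, absent party leadership, the procedural rules embedded in land use law end up doing the choosing.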
Part II of the interview is available here. The abstract is here:
Generations of scholarship on the political economy of zoning have tried to explain a world in which tony suburbs run by effective homeowner lobbies use zoning to keep out development, but big cities allow relatively untrammeled growth because of the political influence of developers. Further, this literature has assumed that, while zoning restrictions can cause “micro-misallocations” inside a metropolitan region, they cannot increase housing prices throughout a region because some of the many local governments in a region will allow development. But these theories have been overtaken by events. Over the past few decades, land use restrictions have driven up housing prices in the nation’s richest and most productive regions, resulting in massive changes in where in America people live and reducing the growth rate of the economy. Further, as demand to live in the nation’s biggest cities has increased, many of them have become responsible for substantial limits on development. Although developers are, in fact, among the most important players in city politics, we have not seen enough growth in the housing supply in many cities to keep prices from skyrocketing.
This paper seeks to explain these changes with a story about big city land use that places the legal regime governing land use decisions at its center. Using the tools of positive political theory, I argue that, in the absence of strong local political parties, land use law sets the voting order in local legislatures, determining policy from potentially cycling preferences. Specifically, these laws create a peculiar procedure, a form of seriatim decision-making in which the intense preferences of local residents opposed to re-zonings are privileged over more weakly-held citywide preferences for an increased housing supply. Without a party leadership to organize deals and whip votes, legislatures cannot easily make deals for generally-beneficial legislation stick. Legislators, who may prefer building everywhere to building nowhere, but who have stronger preferences for stopping construction in their own districts, “defect” as a matter of course, and building is restricted everywhere. Further, the seriatim nature of local land use procedure results in a large number of “downzonings,” or reductions in the ability of landowners to build “as of right,” as big developers do not have an incentive to fight these changes. The cost of moving amendments through the land use process means that small developers cannot overcome the burdens imposed by downzonings, thus limiting incremental growth in the housing stock.
Finally, the paper argues that, as land use procedure is the problem, procedural reform may provide a solution. Land use and international trade have similarly situated interest groups. Trade policy was radically changed, from a highly protectionist regime to a largely free trade one, by the introduction of procedural reforms like the Reciprocal Trade Agreements Act, adjustment assistance, and “safeguards” measures. The paper proposes changes to land use procedures that mimic these reforms. These changes would structure voting order and deal-making in local legislatures in a way that would create support for increases in the urban housing supply.
On May 21-25, the GMU LEC will once again host its Workshop on Empirical Methods for Law Professors. Applications are available at the links below, and more information is available here.
The Workshop on Empirical Methods for Law Professors is designed to teach law professors the conceptual and practical skills required to (1) understand and evaluate others’ empirical studies, and (2) design and implement their own empirical studies. Participants are not expected to have a background in statistics or empirical methods prior to enrollment. Instructors have been selected in part to demonstrate the development of empirical studies in a wide range of legal and institutional settings, including antitrust, business law, bankruptcy, class actions, contracts, criminal law and sentencing, federalism, finance, intellectual property, and securities regulation. Class sessions will give participants opportunities to learn through faculty lectures drawing upon data and examples from cutting-edge empirical legal studies, and through participation in experiments. There will be numerous opportunities for participants to discuss their own works-in-progress or project ideas with the instructors.
Eric Helland, Ph.D., Claremont McKenna College
Jonathan Klick, J.D., Ph.D., University of Pennsylvania School of Law
Bruce Kobayashi, Ph.D., George Mason University School of Law
Joshua Wright, J.D., Ph.D., George Mason University School of Law
The workshop will take place at:
George Mason University School of Law
3301 N. Fairfax Drive
Arlington, VA 22201
The Workshop will begin on Monday, May 21, at 8:30 a.m. and conclude on Friday, May 25, at 12:00 p.m. Classes on May 21-24 will run from 8:30 a.m. to 4:30 p.m., and include lectures and applied “hands-on” sessions. On May 25, the participants will have an opportunity to present their own empirical projects or “works in progress” and receive feedback from instructors and other participants.
Topics covered include:
• Research Design
• Finding Data
• Basic Probability Theory
• Descriptive Statistics
• Formulating Testable Hypotheses
• Statistical Inference
• Cross-Sectional Regression
• Time Series Regression
• Panel Data Techniques
• Sensitivity Analysis
• Experimental Methods
REGISTRATION AND TUITION:
Tuition for the Workshop on Empirical Methods is $1000 (with a discounted rate of $850 if received by April 1, 2012) for the first professor from a law school and $600 for additional registrants from the same school.
Peter Klein offers up some thoughts on “reference bloat” in academic journals:
Nature News (via Bronwyn Hall):
One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published today in Science, also suggest that journal editors strategically target junior faculty, who in turn were more willing to acquiesce.
I think reference bloat is a problem, particularly in management journals (not so much in economics journals). Too many papers include tedious lists of references supporting even trivial or obvious points. It’s a bit like blog entries that ritually link every technical term or proper noun to its corresponding wikipedia entry. “Firms seek to position themselves and acquire resources to achieve competitive advantage (Porter, 1980; Wernerfelt, 1984; Barney, 1986).” Unless the reference is non-obvious, narrowly linked to a specific argument, etc., why include it? Readers can do their Google Scholar searches if needed.
In management this strikes me as a cultural issue, not necessarily the result of editors or reviewers wanting to build up their own citation counts. But I’d be curious to hear about readers’ experiences, either as authors or (confession time!) editors or reviewers.
With all due respect to management journals for requiring citations for authority that water runs downhill, demand curves slope downward, and so forth, I’ve got my money on the law reviews.
I am pleased to pass along the following information regarding Olin-Searle-Smith Fellowships for the upcoming 2012-13 academic year. The application deadline is March 15, 2012.
2012 – 2013
The Olin-Searle-Smith Fellows in Law program will offer top young legal thinkers the opportunity to spend a year working full time on writing and developing their scholarship with the goal of entering the legal academy. Up to three fellowships will be offered for the 2012-2013 academic year.
A distinguished group of academics will select the Fellows. Criteria include:
- Dedication to teaching and scholarship
- A J.D. and extremely strong academic qualifications (such as significant clerkship or law review experience)
- Commitment to the rule of law and intellectual diversity in legal academia
- The promise of a distinguished career as a legal scholar and teacher
Stipends will include $50,000 plus benefits. While details will be worked out with the specific host school for the Fellow, in general the Fellow will be provided with an office and will be included in the life of the school. Fellows are not expected to hold other employment during the term of their fellowships.
All those who feel they fit the criteria are encouraged to apply. Applicants should submit the following:
- A resume and law school transcript
- Academic writing sample(s) with an approximately 50-page limit on the total number of pages submitted (i.e. two 25-page pieces are fine, two 50-page pieces are not)
- A brief discussion of their areas of intellectual interest (approximately 2 pages)
- A statement of their commitment to teaching law
- At least two and generally no more than three letters of support. These should come from people who can speak to your academic potential and should generally include at least two letters from law professors. If you are doing interdisciplinary work, a letter from someone who can speak to your work in that area is also helpful. You may also include additional references with phone numbers.
Applications must be received no later than March 15, 2012.
Applicants will be notified in early to mid-May 2012.
Please submit applications to:
Olin-Searle-Smith Fellows in Law Program
ATTN: Tyler Lowe
c/o The Federalist Society
1015 18th Street, N.W., Suite 425
Washington, D.C. 20036
Or send an email to email@example.com with “Olin-Searle-Smith” in the subject line.
In a thorough and convincing paper, “The FTC’s Proposal for Regulating IP through SSOs Would Replace Private Coordination with Government Hold-Up,” Richard Epstein, Scott Kieff and Dan Spulber assess and then decimate the FTC’s proposal on patent notice and remedies, “The Evolving IP Marketplace: Aligning Patent Notice and Remedies with Competition.” Note Epstein, Kieff and Spulber:
In its recent report entitled “The Evolving IP Marketplace,” the Federal Trade Commission (FTC) advances a far-reaching regulatory approach (Proposal) whose likely effect would be to distort the operation of the intellectual property (IP) marketplace in ways that will hamper the innovation and commercialization of new technologies. The gist of the FTC Proposal is to rely on highly non-standard and misguided definitions of economic terms of art such as “ex ante” and “hold-up,” while urging new inefficient rules for calculating damages for patent infringement. Stripped of the technicalities, the FTC Proposal would so reduce the costs of infringement by downstream users that the rate of infringement would unduly increase, as potential infringers find it in their interest to abandon the voluntary market in favor of a more attractive system of judicial pricing. As the number of nonmarket transactions increases, the courts will play an ever larger role in deciding the terms on which the patents of one party may be used by another party. The adverse effects of this new trend will do more than reduce the incentives for innovation; it will upset the current set of well-functioning private coordination activities in the IP marketplace that are needed to accomplish the commercialization of new technologies. Such a trend would seriously undermine capital formation, job growth, competition, and the consumer welfare the FTC seeks to promote.
Focusing in particular on SSOs, the trio homes in on the potential incentive problem created by the FTC’s proposal:
The central problem with the FTC’s approach is that it would interfere seriously with the helpful incentives all parties in the IP marketplace presently have to contract with each other. The FTC’s approach ignores the powerful incentives that it creates in putative licensees to spurn the voluntary market in order to obtain a strategic advantage over the licensor. In any voluntary market, the low rates that go to initial licensees reflect the uncertainty of the value of the patented technology at the time the license is issued. Once that technology has proven its worth, there is no sound reason to allow any potential licensee who instead held out from the originally offered deal to get bargain rates down the road. Allowing such an option would make the holdout better off than the contracting party. Such holdouts would not need to take licenses for technologies with low value, while resting assured they would still get technologies with high value at below market rates. The FTC seems to overlook that a well-functioning patent damage system should do more than merely calibrate damages after the fact. An efficient approach to damages is one that also reduces the number of infringements overall by making sure that the infringer cannot improve his economic position by his own wrong.
The FTC Proposal rests on the misguided conviction that the law should not allow a licensor to “demand and obtain royalty payments based on the infringer’s switching costs” once the manufacturer has “sunk costs into using the technology;” and it labels any such payments as the result of “hold-up.”
As Epstein, et al. discuss, current private ordering (reciprocal dealing, repeat play, RAND terms, etc.) works perfectly well to address real hold-up problems, and the FTC seems to be both defining the problem oddly and, thus, creating a problem that doesn’t really exist.
Although not discussed directly, the paper owes a great deal to the great Ben Klein and especially his paper, Why Hold-Ups Occur: The Self-Enforcing Range of Contractual Relationships (to say nothing of Klein, Crawford & Alchian, of course). Likewise, although not discussed in the paper, Josh and Bruce Kobayashi’s excellent paper, Federalism, Substantive Preemption and Limits on Antitrust: An Application to Patent Holdup is an essential precursor to this paper, addressing the comparative merits of antitrust and contract-based evaluation of claimed patent holdups in SSOs.
Highly recommended, and an important addition to the ever-interesting antitrust/IP discussion.
Call for Papers Announcement
AALS Section on Antitrust and Economic Regulation
AALS Section on Law & Economics
Behavioral Economics & Antitrust Law
January 5-8, 2012
2012 AALS Annual Meeting
The AALS Section on Antitrust and Economic Regulation and the Section on Law & Economics will hold a joint program on Behavioral Economics and Antitrust Law during the AALS 2012 Annual Meeting in Washington, DC. The program will focus on the influence of Behavioral Economics on Antitrust Law and Policy. Behavioral economics, which examines how individual and market behavior are affected by deviations from the rationality assumptions underlying conventional economics, has generated significant attention from both academics and policy makers. The program will feature presentations by leading scholars who have addressed how behavioral economics impacts antitrust law and policy. Confirmed panelists include Maurice Stucke (University of Tennessee), Steve Salop (Georgetown University), Avishalom Tor (Haifa University), and Josh Wright (George Mason University). We are looking to add at least one additional panelist through this call for papers.
Those with an interest in the subject are encouraged to submit a draft paper or proposal via email to Bruce H. Kobayashi, at firstname.lastname@example.org by September 1, 2011.
Faculty members of AALS member and fee-paid law schools are eligible to submit papers. Foreign, visiting, and adjunct faculty members, graduate students, and fellows are not eligible to submit.
Registration Fee and Expenses:
Call for Paper participants will be responsible for paying their annual meeting registration fee and travel expenses.
How will papers be reviewed?
Papers will be selected after review of submissions by members of the Executive Committee of the AALS Section on Antitrust and Economic Regulation and the AALS Section on Law & Economics. This committee consists of Scott Hemphill (Columbia Law School), Bruce H. Kobayashi (George Mason University Law School), Michael A. Carrier (Rutgers University School of Law), Darren Bush (University of Houston Law Center), D. Daniel Sokol (University of Florida Levin College of Law), Daniel A. Crane (University of Michigan Law School), and Hillary Greene (University of Connecticut School of Law).
Will the program be published in a journal?
Yes, as a symposium in the Journal of Law, Economics & Policy.
Deadline date for submission:
September 1, 2011. Decisions will be announced by September 30, 2011.
Program Date and Time:
Friday January 6, 2012, 10:30am-12:15pm.
Contact for submissions and inquiries:
Bruce H. Kobayashi
Chair, AALS Section on Antitrust and Economic Regulation
George Mason Law School
3301 Fairfax Drive
Arlington, VA 22201
There is lots of talk about the various implications of the agreement among law reviews to cease and desist from the practice of exploding offers. One interesting aspect of the commitment is that it is fairly transparent that the law reviews viewed exploding offers as a method of competing with one another, and the agreement seeks to replace that rivalry with cooperation. The letter, for example, describes the motivation for exploding offers as an attempt to “secure the best articles for our own journal,” which instead led to a “race to the bottom.” It was not too long ago that Thom posted about the antitrust risks associated with collective action aimed at pulling out of the US News rankings. As a practical matter, I don’t view this commitment as amounting to much, nor do I have a problem with exploding offers as a competitive strategy. But in the spirit of final exam season: does the agreement articulated in the Joint Letter violate Section 1 of the Sherman Act? Discuss.