Law School Rankings and Per Capita Downloads

Brian Leiter has posted, with all the caveats that go along with using SSRN downloads to rank faculties, a new set of rankings using downloads for the past 12 months. Leiter lists the top 15 by total downloads and new papers in 2006, along with the share of total downloads attributable to the top 3 authors.  The first thoughts that crossed my mind when I saw these rankings were: (1) I like the use of the share measure, which I found informative; (2) I wonder what would happen using per capita downloads; and (3) should we be using per capita downloads for these sorts of rankings?

So I started to write up a post about using per capita download rankings and THEN read this excellent post by Ted Seto on … “Per Capita Downloads” and scrapped mine.  Seto quickly gets to the heart of the matter regarding controlling for faculty size:

It depends on what you’re trying to measure. Obviously, if what you want to measure is average productivity, you have to divide whatever productivity measure you’re using by the number of bodies. There’s no getting around it.

Not much to argue with there.  The appropriateness of controlling for faculty size surely depends on the function of the rankings.  But let’s assume that these rankings are for the students, at least partially.  What if the rankings are designed to help students figure out which schools have better faculties rather than which have more superstars (or which of them has made the greatest impact)?  There are plenty of avenues for figuring out who the superstars in any given field are.  My sense is that law school is not like graduate study in economics or other fields, where graduate students may make their decision entirely based upon working with a particular mentor.  Prospective students want to know who is productive, but they also want to know whether the faculty is, on average, one that is having a scholarly impact.

My initial thought was that I would whip up some rankings using the per capita measures and see how they change.  Ted’s post has convinced me this isn’t a great idea (ok, I spent an hour doing it anyway but am not going to post it without making the corrections for faculty size) because of the measurement problems involved in counting faculty members. And indeed, the rankings are very sensitive to the number of faculty members in the denominator of these measures. To do this properly, one must account for clinical, emeriti, students, adjuncts, visitors, etc.  This is a pretty big problem.  As Seto writes:

Unfortunately, there is no appropriate standard measure of the size of a law school faculty. The ABA’s measure is the only standard measure of which I am aware, but it includes adjuncts, clinical faculty, and emeriti, at least on a fractional basis. In addition, few seem to know how to follow the ABA’s counting rules. As I have noted elsewhere (see Understanding the U.S. News Law School Rankings at 13), a majority of schools compute and report to U.S. News student/faculty ratios inconsistent with those computed by the ABA. For U.S. News purposes, law schools have an interest in overstating faculty size; some undoubtedly do so.
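To see how much the denominator matters, here is a minimal sketch with made-up numbers (the school names, download totals, and faculty counts are all hypothetical): the same download data can produce a different per capita ordering depending on whether clinical faculty, emeriti, and visitors are counted.

```python
# Hypothetical illustration: per capita download rankings depend heavily on how
# the faculty denominator is defined. All numbers below are invented.

schools = {
    # name: (total downloads, tenure-track count, count incl. clinical/emeriti/visitors)
    "School A": (6000, 40, 60),
    "School B": (5000, 35, 38),
}

def per_capita_ranking(faculty_index):
    """Rank schools by downloads per faculty member, using the chosen faculty count."""
    return sorted(
        schools,
        key=lambda s: schools[s][0] / schools[s][faculty_index],
        reverse=True,
    )

print(per_capita_ranking(1))  # narrow count: ['School A', 'School B'] (150 vs. ~143 per capita)
print(per_capita_ranking(2))  # broad count:  ['School B', 'School A'] (~132 vs. 100 per capita)
```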

Instead, inspired by Leiter’s share measure and Seto’s post, I offer some thoughts on statistics I think would be very useful to have if we assume that these rankings are at least in part produced for the consumption of prospective law school students:

  1. A Concentration Index of Faculty Productivity. Leiter’s concentration measure gives us some information about the distribution of downloads: specifically, the share attributable to the top 3 SSRN authors (note: Seto has another post with rankings removing the top 3 SSRN authors here). Readers with an antitrust background will be familiar with the use of concentration indices such as the C4 or C8 (the market share of the top 4 or 8 firms, respectively), which have now been replaced by the HHI (the Herfindahl-Hirschman Index). The HHI is calculated by squaring the market share of each firm competing in the market and then summing the resulting numbers. For example, for a market consisting of four firms with shares of thirty, thirty, twenty and twenty percent, the HHI is 2600 (30² + 30² + 20² + 20² = 2600). HHIs therefore range from 0 (an infinite number of firms with shares approximating zero) to 10,000 (a true monopoly). Merger policy decisions are then (sometimes) based on the level and change in HHI that would be generated by a proposed merger. The conceptual advantage of the HHI over the C4 or related measures is that the HHI tells us MORE about the distribution of output (in our case, downloads) than adding up the shares of the top 3, 4, or 8 firms (authors). I understand that the share of the top 3 SSRN authors does tell us something interesting and helps us to identify outliers. But if what we are after is a measure that tells us how much of a law school’s productivity is concentrated in the work of a few authors, I think an HHI for law school faculties might be a very useful statistic (a minimal sketch of the calculation follows this list).
  2. How Many Zeros? Seto’s post points out that it is really difficult to get the denominator right using per capita measures because faculty size data are an unreliable (and moving) target.  The measure above would be useful in telling students something about the concentration of downloads, but might students also want to know how many faculty members don’t have new papers in the last year, 2 years, or 5 years?  What share of the faculty is “unproductive” in this sense?  I would imagine that students might want to know this information and that it might be more telling than looking at total download figures (a sketch of this measure also follows the list).
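To make the HHI idea from item 1 concrete, here is a minimal sketch of a download HHI for a single faculty, assuming we already have per-author SSRN download counts (the author labels and counts below are hypothetical; the shares are chosen to mirror the four-firm example above):

```python
# Hypothetical sketch of a download-concentration HHI for one faculty.
# Author labels and download counts are invented for illustration.

downloads = {"Author 1": 3000, "Author 2": 3000, "Author 3": 2000, "Author 4": 2000}

def download_hhi(downloads):
    """Sum of squared percentage shares; ranges from near 0 (dispersed) to 10,000 (one author)."""
    total = sum(downloads.values())
    return sum((d / total * 100) ** 2 for d in downloads.values())

print(download_hhi(downloads))  # 2600.0, matching the four-firm example in item 1
```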
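And a correspondingly simple sketch of the “how many zeros” measure from item 2, again with hypothetical data: the share of listed faculty with no new SSRN papers over whatever window (1, 2, or 5 years) one chooses.

```python
# Hypothetical sketch of the "how many zeros" measure: the share of listed
# faculty with no new SSRN papers over some window. Data are invented.

new_papers_last_2_years = {"Author 1": 4, "Author 2": 0, "Author 3": 1, "Author 4": 0}

def share_with_zero_papers(paper_counts):
    """Fraction of the faculty with no new papers in the chosen window."""
    zeros = sum(1 for n in paper_counts.values() if n == 0)
    return zeros / len(paper_counts)

print(share_with_zero_papers(new_papers_last_2_years))  # 0.5
```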

I think you might actually learn more about faculty productivity and impact by watching these measures over time than from anything else.  But these are two I would like to see.  What else might students want to know about faculty productivity?   How about the faculty themselves?  And how would you measure it?