Law School Rankings and Per Capita Downloads

Josh Wright —  14 March 2007

Brian Leiter has posted, with all the caveats that go along with using SSRN downloads to rank faculties, a new set of rankings using downloads for the past 12 months. Leiter lists the top 15 schools by total downloads and by new papers in 2006, along with the share of total downloads attributable to each school's top 3 authors.  The first thoughts that crossed my mind when I saw these rankings were: (1) I like the share measure, which I found informative; (2) I wonder what would happen using per capita downloads; and (3) should we be using per capita downloads for these sorts of rankings?

So I started to write up a post about using per capita download rankings and THEN read this excellent post by Ted Seto on … “Per Capita Downloads” and scrapped mine.  Seto quickly strikes at the heart of the matter regarding controlling for faculty size:

It depends on what you’re trying to measure. Obviously, if what you want to measure is average productivity, you have to divide whatever productivity measure you’re using by the number of bodies. There’s no getting around it.

Not much to argue with there.  The appropriateness of controlling for faculty size surely depends on the function of the rankings.  But let’s assume that these rankings are for the students, at least partially.  What if the rankings are designed to help students figure out which schools have better faculties rather than which have more superstars (or which of them has made the greatest impact)?  There are plenty of avenues for figuring out who the superstars in any given field are.  My sense is that law school is not like graduate study in economics or other fields, where graduate students may make their decision entirely based upon working with a particular mentor.  Prospective students want to know who is productive, but they also want to know whether the faculty is, on average, one that is having a scholarly impact.

My initial thought was that I would whip up some rankings using the per capita measures and see how they change.  Ted’s post has convinced me this isn’t a great idea (ok, I spent an hour doing it anyway, but am not going to post it without making the corrections for faculty size) because of the measurement problems involved in counting faculty members. And indeed, the rankings are very sensitive to the number of faculty members in the denominator of these measures. To do this properly, one must account for clinical faculty, emeriti, students, adjuncts, visitors, etc.  This is a pretty big problem.  As Seto writes:

Unfortunately, there is no appropriate standard measure of the size of a law school faculty. The ABA’s measure is the only standard measure of which I am aware, but it includes adjuncts, clinical faculty, and emeriti, at least on a fractional basis. In addition, few seem to know how to follow the ABA’s counting rules. As I have noted elsewhere (see Understanding the U.S. News Law School Rankings at 13), a majority of schools compute and report to U.S. News student/faculty ratios inconsistent with those computed by the ABA. For U.S. News purposes, law schools have an interest in overstating faculty size; some undoubtedly do so.

Instead, inspired by Leiter’s share measure and Seto’s post, I offer some thoughts on statistics I think would be very useful to have if we assume that these rankings are at least in part produced for the consumption of prospective law school students:

  1. A Concentration Index of Faculty Productivity. Leiter’s concentration measure gives us some information about the distribution of downloads: specifically, the share attributable to the top 3 SSRN authors (note: Seto has another post with rankings removing the top 3 SSRN authors here). Readers with an antitrust background will be familiar with concentration indices such as the C4 or C8 (the combined market share of the top 4 or 8 firms, respectively), which have now been replaced by the HHI (the Herfindahl-Hirschman Index). The HHI is calculated by squaring the market share of each firm competing in the market and then summing the resulting numbers. For example, for a market consisting of four firms with shares of thirty, thirty, twenty, and twenty percent, the HHI is 2600 (30² + 30² + 20² + 20² = 2600). HHIs therefore range from 0 (an infinite number of firms with shares approximating zero) to 10,000 (a true monopoly). Merger policy decisions are then (sometimes) based on the level of, and change in, the HHI that would be generated by a proposed merger. The conceptual advantage of the HHI over the C4 or related measures is that the HHI tells us MORE about the distribution of output (in our case, downloads) than adding up the shares of the top 3, 4, or 8 firms (authors). I understand that the share of the top 3 SSRN authors does tell us something interesting and helps us to identify outliers. But if what we are after is a measure that tells us how much of a law school’s productivity is concentrated in the work of a few authors, I think an HHI for law school faculties might be a very useful statistic (see the sketch after this list).
  2. How Many Zeros? Seto’s post points out that it is really difficult to get the denominator right using per capita measures because faculty size data are an unreliable (and moving) target.  The concentration measure above would tell students something about the distribution of downloads, but might students also want to know how many faculty members have had no new papers in the last year, 2 years, or 5 years?  What share of the faculty is “unproductive” in this sense?  I would imagine that students might want to know this information, and that it might be more telling than looking at total download figures (this measure is also illustrated in the sketch below).
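For concreteness, here is a minimal sketch of how both statistics could be computed. The Python below is mine, not Leiter’s or Seto’s, and the seven-member faculty’s download and new-paper counts are invented for illustration; real inputs would be SSRN’s per-author figures for a given school.

```python
# A minimal sketch of the two proposed statistics. All faculty figures
# below are hypothetical; real inputs would be SSRN's per-author
# download and new-paper counts for a single school.

def hhi(downloads):
    """Herfindahl-Hirschman Index over per-author download shares.

    Each author's share is expressed in percentage points and squared,
    so the index runs from near 0 (downloads spread evenly across a
    large faculty) to 10,000 (one author accounts for everything).
    """
    total = sum(downloads)
    if total == 0:
        return 0.0
    return sum((100 * d / total) ** 2 for d in downloads)

def zero_share(new_paper_counts):
    """Share of the faculty with no new papers in the window measured."""
    return sum(1 for n in new_paper_counts if n == 0) / len(new_paper_counts)

# The worked example from the text: four firms (authors) with shares
# of 30, 30, 20, and 20 percent.
print(hhi([30, 30, 20, 20]))  # 2600.0

# A hypothetical seven-member faculty.
downloads = [5200, 1800, 900, 400, 150, 0, 0]
new_papers = [6, 3, 2, 1, 1, 0, 0]
print(round(hhi(downloads)))             # ~4380: downloads heavily concentrated
print(round(zero_share(new_papers), 2))  # 0.29: two of seven with no new papers
```

One nice property of the HHI here: authors with zero downloads add nothing to the sum, so the HHI is somewhat less sensitive than the per capita (or zero-share) measures to the faculty-counting problems Seto describes.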

I think you might actually learn more about faculty productivity and impact by watching these measures over time than from anything else.  But these are two measures I would like to see.  What else might students want to know about faculty productivity?   How about the faculty themselves?  And how would you measure it?

4 responses to Law School Rankings and Per Capita Downloads

  1. 

    Forgive me for being colored by my amazement at the general overemphasis on ranking and measuring in our profession (e.g., cite-o-meter). The question really comes down to how to measure faculty quality vis-a-vis what students want (and I do not accuse you of ignoring this question). What does faculty scholarship have to do with it? If a faculty is measured to be the most productive or have the most impact, how does that help the students? Does the productivity or impact translate into better training for the future lawyer? I know my and many other professors’ teaching takes a hit when we are deep into writing an article.

    Or are people simply ending up making the leap that a faculty that is prestigious helps the students because it makes the law school more prestigious which helps the students get more (prestigious) jobs?

    Students want job opportunities, so part of this concern about prestige is certainly valid. But I know from my own experience that, after graduating from a prestigious law school in the top 15 percent of my class, I had to engage in an enormous amount of self-education in practice that some of my peers at my firm from *lesser* schools did not.

    The question is whether I learned that ability to self-educate from my prestigious faculty. I can’t answer that one, at least not this morning.

  2. 

    Sick and puerile, huh? I’ve thought it over. And I plead not guilty.

    Perhaps you missed the point of the post, or I did not make it sufficiently clear. The post was aimed at discussing potential measures of faculty productivity/quality that are helpful to STUDENTS. Though I do not deny that faculties are also interested for their own (quite different) reasons. I had hoped this was clear from the post. If it was not, I hope it is now.

    Surely you do not find offering better information to prospective law students to be “sick and puerile” (honestly, though, anon, I am at a bit of a loss about the “sick” part, even granting a liberal misreading of the post …).

    Here are some things that I think are not in dispute: (1) students care about faculty quality when they make law school decisions; (2) SSRN downloads are a measure likely to be relied upon by students for this purpose; (3) there are well known problems with these measures; (4) producing a better measure may be a service that makes law school students better off by enabling a more informed decision.

    Students, no matter what I say, and no matter how much I read about chasing vanities to cleanse my soul of my propensity to think about rankings and statistics (which I really enjoy, btw) … will use the best information available. I think there are other statistics that could be used to produce a more informative measure. Some involve controlling for faculty size; others (e.g., the concentration measure) do not. There are other complications that could also be resolved to produce a better measure of faculty quality that I do not discuss here. But I find it fairly obvious that there are potentially fruitful uses of accurate measures of faculty quality.

    Finally, Leiter can obviously fend for himself, but to my knowledge he has been one of the most outspoken critics of using SSRN downloads to measure faculty quality. Even so, I presume he would disagree, as I do, with your assertions that faculty quality is not at all measurable and that it would not be worth the time to measure even if it were.

  3. 

    This all seems sort of sick and puerile. Why the focus on status, rank, etc.? Is it that important? To what end? You (and Leiter) are trying to measure what is in fact not measurable.

    Would it be better to spend our time trying to make an impact (not to mention enjoying our short lives (in part, of course, by doing our work) and the people around us), rather than trying to measure the impact we’ve made, and the impact of others, and to rank these people?

    Did Beethoven or Mozart do this? Cardozo?

    Is this what legal academia has come to?

    You might want to read some of the wisdom of the ages about chasing vanities …

Trackbacks and Pingbacks:

  1. TRUTH ON THE MARKET » Bill Henderson is One Reason I Read Blogs - March 25, 2007

    […] readers of TOTM know, I am interested in law school rankings. So recently I’ve been reading about the Vault “underrated” law school rankings […]