Archives For James Bessen

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US, and that this has caused economic harm, has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that examine the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).
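For readers unfamiliar with the measure, the HHI that Berry, Gaynor, and Scott Morton caution against is just the sum of squared market shares. A minimal sketch in Python (the market shares are hypothetical, and the “highly concentrated” cutoff referenced in the comment is the 2,500-point threshold from the US agencies’ 2010 Horizontal Merger Guidelines):

```python
# Herfindahl-Hirschman Index: the sum of squared market shares.
# With shares expressed as percentages, the index runs from near 0
# (atomistic competition) to 10,000 (pure monopoly).
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Hypothetical four-firm market with 40/30/20/10 shares:
print(hhi([40, 30, 20, 10]))  # 3000 -- above the 2,500 "highly concentrated" threshold
```

The quoted passage’s point is precisely that a number like this, standing alone, identifies neither the cause of concentration nor its competitive effect.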

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
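The arithmetic behind that “more than 100%” claim is worth making explicit. A stylized illustration (the numbers below are hypothetical, not the paper’s data): if a top firm expands into enough new counties, its national employment share can rise even while its employment per county falls, so all of the share growth, and then some, is attributable to geographic expansion.

```python
# A top firm's share of national industry employment:
# (counties served) x (employment per county) / (total industry employment)
def national_share(counties, emp_per_county, total_emp):
    return counties * emp_per_county / total_emp

TOTAL = 100_000  # total industry employment, held fixed for simplicity

early = national_share(counties=10, emp_per_county=500, total_emp=TOTAL)  # 0.05
late = national_share(counties=40, emp_per_county=300, total_emp=TOTAL)   # 0.12

# Employment per county fell (500 -> 300), yet the national share rose from
# 5% to 12%: more than 100% of the share growth comes from serving more
# counties -- and each county entered gains a new competitor locally.
print(early, late)
```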

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. 

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

In a response to my essay, The Trespass Fallacy in Patent Law, in which I explain why patent scholars like Michael Meurer, James Bessen, T.J. Chiang and others are committing the nirvana fallacy in their critiques of the patent system, my colleague T.J. Chiang writes at PrawfsBlawg:

The Nirvana fallacy, at least as I understand it, is to compare an imperfect existing arrangement (such as the existing patent system) to a hypothetical idealized system. But the people comparing the patent system to real property—and I count myself among them—are not comparing it to an idealized fictional system, whether conceptualized as land boundaries or as estate boundaries. We are saying that, based on our everyday experiences, the real property system seems to work reasonably well because we don’t feel too uncertain about our real property rights and don’t get into too many disputes with our neighbors. This is admittedly a loose intuition, but it is not an idealization in the sense of using a fictional baseline. It is the same as saying that the patent system seems to work reasonably well because we see a lot of new technology in our everyday experience.

I would like to make two quick points in response to T.J.’s attempt to wiggle out of serving as one of the examples I identify in my essay of a patent scholar who uses trespass doctrine in a way that reflects the nirvana fallacy.

First, what T.J. describes as what he is doing — comparing an actual institutional system to a “loose intuition” about another institutional system — is exactly what Harold Demsetz identified as the nirvana fallacy (when he coined the term in 1969).  When economists or legal scholars commit the nirvana fallacy, they always justify their idealized counterfactual standard by appeal to some intuition or gestalt sense of the world; in fact, Demsetz’s example of the nirvana fallacy is economists who have a loose intuition that regulation always works perfectly to fix market failures.  These economists do this for the simple reason that they’re social scientists, and so they have to make their critiques seem practical.

It’s like the infamous statement by Pauline Kael in 1972 (quoting from memory): “I can’t believe Nixon won, because I don’t know anyone who voted for him.” Similarly, what patent scholars like T.J. are doing is saying: “I can’t believe that trespass isn’t clear and efficient, because I don’t know anyone who has been involved in a trespass lawsuit, and I don’t hear of any serious trespass lawsuits.”  Economists and legal scholars always have some anecdotal evidence — either personal experiences or an impressionistic intuition about other people — to offer as support for the counterfactual by which they’re evaluating (and criticizing) the actual facts of the world. The question is whether such an idealized counterfactual is a valid empirical metric; of course, it is not.  Doing this is exactly what Demsetz criticized as the nirvana fallacy.

Ultimately, no social scientist or legal scholar ever commits the “nirvana fallacy” as T.J. has defined it in his blog posting, and this leads to my second point.  The best way to test T.J.’s definition is to ask: Does anyone know a single lawyer, legal scholar or economist who has committed the “nirvana fallacy” as defined by T.J.?  What economist or lawyer appeals to a completely imaginary “fictional baseline” as the standard for evaluating a real-world institution?

The answer to this question is obvious.  In fact, when I posed this exact question to T.J. in an exchange we had before he made his blog posting, he could not answer it.  The reason he could not answer it is that no one says in legal or economic scholarship: “I have a completely made-up, imaginary ‘fictionalized’ world to which I’m going to compare a real-world institution or legal doctrine.”  This certainly is not the meaning of the nirvana fallacy, and I’m fairly sure Demsetz would be surprised to learn that he identified a fallacy that, according to T.J., has never been committed by a single economist or legal scholar. Ever.

In sum, what T.J. describes in his blog posting — using a “loose intuition” of one institution as an empirical standard for critiquing the operation of another institution — is the nirvana fallacy. Philosophers may posit completely imaginary and fictionalized baselines — it’s what they call “other worlds” — but that is not what social scientists and legal scholars do.  Demsetz was not talking about philosophers when he identified the nirvana fallacy.  Rather, he was talking about exactly what T.J. admits he does in his blog posting (and what he has done in his scholarship).

Thank you to Josh for inviting me to guest blog on Truth on the Market.  For my first blog posting, I thought TOTM readers would enjoy reading about my latest paper, which I posted to SSRN and which has been getting some attention in the blogosphere (see here and here).  It’s a short, 17-page essay — see, it is possible for law professors to write short articles — called The Trespass Fallacy in Patent Law.

This essay responds to the widely heard cries today that the patent system is broken, as expressed in the popular press and by tech commentators, legal academics, lawyers, judges, congresspersons and just about everyone else.  The $1 billion verdict issued this past Friday against Samsung in Apple’s patent infringement lawsuit hasn’t changed anything. (If anything, Judge Richard Posner finds the whole “smart phone war” to be Exhibit One in the indisputable case that the patent system is broken.)

Although there are many reasons why people think the patent system is systemically broken, one common refrain is that patents fail as property rights because patent infringement doctrine is not as clear, determinate and efficient as trespass doctrine is for real estate. Thus, the explicit standard that is invoked to justify why we must fix patent boundaries — or the patent system more generally — is that the patent system does not work as clearly and efficiently as fences and trespass doctrine do in real property. As Michael Meurer and James Bessen explicitly state in their book, Patent Failure: “An ideal patent system features rights that are defined as clearly as the fence around a piece of land.”

My essay explains that this is a fallacious argument, suffering both empirical and logical failings. Empirically, there are no formal studies of how trespass functions in litigation; thus, complaints about the patent system’s indeterminacy are based solely on an idealized theory of how trespass should function.  Oftentimes, patent scholars like my colleague T.J. Chiang simply assert, without any supporting evidence whatsoever, that fences are “crystal clear” and thus that there are “stable boundaries” for real estate; T.J. thus concludes that the patent system is working inefficiently and needs to be reformed (as captured in the very title of his article, Fixing Patent Boundaries). The variability in patent claim construction, asserts T.J., is tantamount to “the fence on your land . . . constantly moving in random directions. . . . Because patent claims are easily changed, they serve as poor boundaries, undermining the patent system for everyone.”

Other times, this idealized theory about trespass is given some credence by appeals to loose impressions or a gestalt of how trespass works, or by appeals to anecdotes and personal stories about how well trespass functions in the real world. Bessen and Meurer do this in their book, Patent Failure, where they back up their claim that trespass is clear with a search they apparently ran on Westlaw for innocent-trespass cases in California over a three-year period. Either way, assertions backed by intuitions or a few anecdotal cases cannot serve as an empirical standard for a systemic evaluation that we should shift to another institutional arrangement because the current one is operating inefficiently. In short, the trespass standard represents the nirvana fallacy.

Even more important, anecdotal evidence and related studies suggest that trespass and other boundary disputes between landowners are neither as clear nor as determinate as patent scholars assume them to be (something I briefly summarize in my essay, along with a call for more empirical studies).

Logically, the comparison of patent boundaries to trespass commits what philosophers would call a category mistake. It conflates the boundaries of an entire legal right (a patent), not with the boundaries of its conceptual counterpart (real estate), but rather with a single doctrine (trespass) that secures real estate only in a single dimension (geographic boundaries). As all 1Ls learn in their Property courses, real estate is not land. Accordingly, estate boundaries are defined along the dimensions of time, use and space, as represented in myriad doctrines like easements, nuisance, restrictive covenants, and future interests, among others. In fact, the overlapping possessory and use rights shared by owners of joint tenancies or by owners of possessory estates with overlapping future interests share many conceptual and doctrinal similarities to the overlapping rights that patent-owners may have over a single product in the marketplace (like a smart phone).  In short, the proper conceptual analog for patent boundaries is estate boundaries, not fences.

In sum, the trespass fallacy is driving an indeterminacy critique in patent law that is both empirically unverified and conceptually misleading. Check out my essay for much more evidence and a more in-depth explanation of why this is the case.

A colleague sent along the 2011 Washington & Lee law journal rankings.  As co-editor of the Supreme Court Economic Review (along with Todd Zywicki and Ilya Somin) I was very pleased to notice how well the SCER is faring by these measures.  While these rankings should always be taken with a grain of salt or two, by “Impact Factor” here are the top 3 law journals in the “economics” sub-specialty:

  1. Supreme Court Economic Review (1.46)
  2. Journal of Legal Studies (1.31)
  3. Journal of Empirical Legal Studies (1.2)

SCER comes in third in the “Combined” rankings behind Journal of Empirical Legal Studies and the Journal of Legal Studies.

SCER is a peer-reviewed journal and operates on an exclusive submission basis.  You can take a look at our most recent volume here.  If you have an interesting law & economics piece (hint: it need not be related to a Supreme Court case) you’d like to submit, please consider us.

Submissions can be emailed to: scer@gmu.edu

UPDATE: I should also note that George Mason’s Journal of Law, Economics and Policy also ranks very well by these measures!  It is a student-run journal here at GMU Law and comes in 13th and 16th in the “economics” category by impact factor and combined ranking, respectively.

Speaking of JLEP ….

JLEP will be hosting a great symposium in conjunction with GMU’s Information Economy Project (directed by Tom Hazlett) on Friday: The Digital Inventor: How Entrepreneurs Compete on Platforms.   I have the privilege of moderating one of the panels.  But the lineup of speakers is just terrific.

  • Richard Langlois, University of Connecticut, Department of Economics 
  • Thomas Hazlett, Prof. of Law & Economics, George Mason University
  • Andrei Hagiu, Harvard Business School, Multi-Sided Platforms
  • Salil Mehra, Temple University Beasley School of Law, Platforms and the Choice of Models
  • Donald Rosenberg, Qualcomm, Inc.
  • Anne Layne-Farrar, Compass-Lexecon, The Brothers Grimm Book of Business Models: A Survey of Literature and Developments in Patent Acquisition and Litigation
  • James Bessen, Boston University School of Law, The Private Costs of Patent Litigation
  • David Teece, Haas School of Business, UC Berkeley

This post is from Dennis Crouch (Missouri/PatentlyO)

I am enjoying Professor Carrier’s new book Innovation in the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law. I will focus my discussion here on patent issues discussed in Part III of the book.

As other commentaries have noted, the book is long on conclusions and proposals but somewhat short on justifications for those conclusions. In the words of Geoff Manne: “with what seems to me to be little support (and with only essentially-anecdotal empirical support), Carrier then chooses sides.” On the patent side, Carrier rather consistently chooses sides in favor of weaker patents.

Thank you Supreme Court: Like many academics, Carrier knows that patent law circa 2006 was in a bad state. The problems stem from the Federal Circuit and its “formalistic rules”; from “patent trolls [who] do not manufacture products and thus do not face patent infringement counterclaims, emboldening them to file lawsuits”; and from the PTO and its insufficient resources.  The pendulum had swung too far in favor of the patent applicant and the litigious patent holder. In Carrier’s history, the Supreme Court at least partially saved the day by weakening patent rights in eBay (no automatic injunctive relief), KSR (easing obviousness rules), and MedImmune (greater access to declaratory judgment actions). Seeing the light, the Federal Circuit also rolled back the scourge of treble damages for willful infringement in a way that “promises to promote disclosure and innovation.” Because of the Supreme Court’s action, many of the proposals needed in 2006 “are no longer needed.” From an antitrust-harm perspective, eBay and MedImmune are theoretically important because they help prevent potential holdups. We are left without any answer, however, as to whether it is worth the added litigation expense and reduced patent incentive to shadow-box with these mythical holdups. It is interesting that the best example Carrier provides is the NTP Blackberry case, which RIM eventually settled for more than $600 million. In that case, RIM had taken on the risk of a large settlement by declining early opportunities to settle. And because of the competitive nature of the wireless market, there is no indication that the settlement raised prices or limited access in any way.

On KSR, my reading is that Carrier sees the case as benefiting patent quality – at least the likelihood that issued patents are valid. Later, Carrier links the elimination of invalid patents with a pro-competitive benefit. (p. 229). What I don’t understand is whether Carrier’s argument is special to invalid patents – or is he simply saying that the marketplace would be more competitive without patent rights?

Post-Grant Opposition: Chapter 9 is devoted to a new post-grant opposition layered over the reexamination and interference procedures. Carrier’s proposal is a close parallel to the proposals in the Patent Reform Act of 2009, and I agree with his rejection of current alternatives. (1) It would be prohibitively expensive (and I would argue detrimental to innovation) to ensure that only valid patents issue on the first pass through the PTO; (2) challenging patents during litigation is expensive and financially risky; and (3) current reexamination proceedings are too limited in scope and procedure (and I would argue too slow).

I have a small problem with Carrier’s explanation of the benefits of his proposed system. He first indicates that stronger post-grant review will lower prices because competitors will less often need to spend money designing around a would-be invalid patent. Then, in the next breath, Carrier promises spillover technology benefits derived from money spent reviewing competitors’ patents for opposition. Of course, these two arguments are two sides of the same coin. If money spent designing around is wasteful, so is money spent reviewing the validity of patents. Likewise, if reviewing competitors’ patents leads to additional innovation, so will time spent designing around.

Carrier also notes the “antitrust benefit” that invalidated patents will no longer create any market power problems. Glaringly absent from the discussion is how the opposition proceedings would impact the innovation incentive – especially under the PTO’s current mantra favoring rejection.

Material Transfer Agreements: Carrier includes Chapter 12 on MTAs in the patent section as well. It is an important topic, although it is unclear why it fits in the patent section. The closest link is that many material transfer agreements include restrictions on public disclosure and a declaration of ownership of any future patent rights. MTAs are generally negotiated. A researcher typically wants access to some materials such as a stem-cell line, seed line, or tissue. The owner of those physical items ordinarily demands some consideration from the researcher as inducement for sharing.

Carrier’s problems with the current MTA approach appear to be three-fold. First, some researchers are unwilling to pay the consideration and thus cannot access the materials. Second, the negotiation has high transaction costs – including delay. And, third, the public loses when researchers are restricted or delayed from publishing. His solution: require all agencies receiving federal funding to agree to a standard universal MTA (the UBMTA). The proposal is nice, but we really don’t know its impact. Parties that care about non-standard terms would still do side deals — adding more complexity than before the rule. Alternatively, those parties may simply walk away because the terms are not acceptable — further limiting access to the materials.

Pricing: Finally, I have a word to say about Oxford University Press. The books are great, but they are entirely too expensive. The list price for this book is $65, while the Bessen and Meurer book from Princeton University Press was only $30. Authors, when you negotiate your book deal, work to make sure the book is affordable.