
Discussion

In recent years, U.S. government policymakers have recounted various alleged market deficiencies associated with patent licensing practices, as part of a call for patent policy “reforms” – with the “reforms” likely to have the effect of weakening patent rights.  In particular, antitrust enforcers have expressed concerns that:  (1) the holder of a patent covering the technology needed to implement some aspect of a technical standard (a “standard-essential patent,” or SEP) could “hold up” producers that utilize the standard by demanding anticompetitively high royalty payments; (2) the accumulation of royalties for the multiple complementary patent licenses needed to make a product could exceed the bundled monopoly rate that would be charged if all patents were under common control (“royalty stacking”); (3) an overlapping set of patent rights requiring producers who seek to commercialize a new technology to obtain licenses from multiple patentees could deter innovation (“patent thickets”); and (4) the dispersed ownership of complementary patented inventions could result in “excess” property rights, the underuse of resources, and economic inefficiency (“the tragedy of the anticommons”).  (See, for example, Federal Trade Commission and U.S. Justice Department reports on antitrust and intellectual property policy, here, here, and here.)

Although some commentators have expressed skepticism about the real-world incidence of these scenarios, relatively little attention has been paid to the underlying economic assumptions that generate the portrayed “excessive royalty” problem.  Very recently, however, Professor Daniel F. Spulber of Northwestern University circulated a paper that questions those assumptions.  The paper points out that claims of economic harm from excessive royalty charges rest critically on the assumption that individual patent owners choose royalties using posted prices, thereby generating total royalties above the monopoly level that would be charged if all complementary patents were owned in common.  In other words, it is assumed that interdependencies among complements are ignored, with each patentee separately charging its own monopoly price – the “Cournot complements” problem.
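
To see the posted-price logic concretely, consider a minimal textbook-style sketch of the Cournot complements problem.  The sketch is my own illustration, not taken from Professor Spulber’s paper, and its particular functional forms are assumptions chosen only for simplicity: a product requires licenses to two complementary patents, each patentee $i$ posts a per-unit royalty $r_i$, production costs and the producer’s margin are normalized to zero so royalties pass straight through to price, and final demand is $Q = a - (r_1 + r_2)$.

Each patentee independently maximizes
\[
\pi_i = r_i\,(a - r_i - r_j) \;\;\Longrightarrow\;\; a - 2r_i - r_j = 0 \;\;\Longrightarrow\;\; r_1 = r_2 = \tfrac{a}{3}, \qquad R^{NE} = \tfrac{2a}{3}.
\]
A single owner of both patents (a bundled monopolist) would instead choose the total royalty $R$ to maximize $R\,(a - R)$, giving $R^{M} = a/2$.  Because $2a/3 > a/2$, independently posted royalties “stack” above the bundled monopoly level, output falls from $a/2$ to $a/3$, and the patentees’ combined profits fall from $a^2/4$ to $2a^2/9$.  This double-marginalization inefficiency is the engine behind each of the four policy concerns listed above.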

In reality, however, Professor Spulber explains that patent licensing usually involves bargaining rather than posted prices, because such licensing involves long-term contractual relationships between patentees and producers, rather than immediate exchange.  Significantly, the paper shows that bargaining procedures reflecting long-term relationships maximize the joint profits of inventors (patentees) and producers, with licensing royalties being less than (as opposed to more than under posted prices) bundled monopoly royalties.  In short, bargaining over long-term patent licensing contracts yields an efficient market outcome, in marked contrast to the inefficient outcome posited by those who (wrongly) assume patent licensing under posted prices.  In other words, real world patent holders (as opposed to the inward-looking, non-cooperative, posted-price patentees of government legend) tend to engage in highly fruitful licensing negotiations that yield socially efficient outcomes.  This finding neatly explains why examples of economically-debilitating patent thickets, royalty stacks, hold-ups, and patent anti-commons, like unicorns (or perhaps, to be fair, black swans), are amazingly hard to spot in the real world.  It also explains why the business sector that should in theory be most prone to such “excessive patent” problems, the telecommunications industry (which involves many different patentees and producers, and tens of thousands of patents), has been (and remains) a leader in economic growth and innovation.  (See also here, for an article explaining that smartphone innovation has soared because of the large number of patents.)
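
The direction of Professor Spulber’s result can be illustrated by extending the same stylized sketch.  Again, this simplification is mine (the paper develops a non-cooperative model in which inventors make licensing offers and negotiate royalty rates with producers), and the two-part-tariff device below is just one transparent way to see why joint-profit maximization pushes royalties down rather than up.  Suppose each negotiated license may combine a running royalty $r_i$ with a fixed fee $F_i$.  With the same demand $Q = a - p$ and zero production cost, the joint profits of the two inventors and the producer are
\[
\Pi^{joint} = p\,(a - p) \;\;\Longrightarrow\;\; p^{*} = \tfrac{a}{2}, \qquad Q^{*} = \tfrac{a}{2}, \qquad \Pi^{*} = \tfrac{a^{2}}{4}.
\]
Efficient bargaining lets the parties reach this outcome, for example by setting $r_1 = r_2 = 0$ and dividing the surplus through fixed fees $F_i = \beta_i\,a^{2}/4$, where the $\beta_i$ are the parties’ bargaining shares.  The per-unit royalty burden is then (weakly) below the bundled monopoly rate of $a/2$, output is $a/2$ rather than the $a/3$ produced under posted prices, and no licensor gains by stacking an additional markup on top, because doing so would shrink the very surplus being divided.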

Professor Spulber’s concluding section highlights the policy implications of his research:

The efficiency of the bargaining outcome differs from the outcome of the Cournot posted prices model. Understanding the role of bargaining helps address a host of public policy concerns, including SEP holdup, royalty stacking, patent thickets, the tragedy of the anticommons, and justification for patent pools. The efficiency of the bargaining outcome suggests the need for antitrust forbearance toward industries that combine multiple inventions, including SEPs.

Professor Spulber’s reference to “antitrust forbearance” is noteworthy.  As I have previously pointed out (see, for example, here, here, and here), in recent years U.S. antitrust enforcers have taken positions that tend to favor the weakening of patent rights.  Those positions are justified by the “patent policy problems” that Professor Spulber’s paper debunks, as well as by an emphasis on low-quality “probabilistic patents” (see, for example, here) that ignores a growing body of theoretical and empirical literature on the economic benefits of a strong patent system (see, for example, here and here).

In sum, Professor Spulber’s impressive study is one more piece of compelling evidence that the federal government’s implicitly “anti-patent” positions are misguided.  The government should reject those positions and restore its previous policy of respect for robust patent rights – a policy that promotes American innovation and economic growth.

Appendix

While Professor Spulber’s long paper is well worth a careful read, key excerpts from his debunking of prominent “excessive patent” stories are set forth below.

SEP Holdups

Standard Setting Organizations (SSOs) are voluntary organizations that establish and disseminate technology standards for industries. Patent owners may declare that their patents are essential to manufacturing products that conform to the standard. Many critics of SSOs suggest that inclusion of SEPs in technology standards allows patent owners to charge much higher royalties than if the SEPs were not included in the standard. SEPs are said to cause a form of “holdup” if producers using the patented technology would incur high costs of switching to alternative technologies. . . . [Academic] discussions of the effects of SEPs [summarized by the author] depend on patent owners choosing royalties using posted prices, generating total royalties above the bundled monopoly level. When IP owners and producers engage in bargaining, the present analysis suggests that total royalties will be less than the bundled monopoly level. Efficiencies in choosing licensing royalties should mitigate concerns about the effects of SEPs on total royalties when patent licensing involves bargaining. The present analysis further suggests bargaining should reduce or eliminate concerns about SEP “holdup”. Efficiencies in choosing patent licensing royalties also should help mitigate concerns about whether or not SSOs choose efficient technology standards.

Royalty Stacking

“Royalty stacking” refers to the situation in which total royalties are excessive in comparison to some benchmark, typically the bundled monopoly rate. . . . The present analysis shows that the perceived royalty stacking problem is due to the posted prices assumption in Cournot’s model. . . . The present analysis shows that royalty stacking need not occur with different market institutions, notably bargaining between IP owners and producers. In particular, with non-cooperative licensing offers and negotiation of royalty rates between IP owners and producers, total royalties will be less than the royalties chosen by a bundled monopoly IP owner. The result that total royalties are less than the bundled monopoly benchmark holds even if there are many patented inventions. Total royalties are less than the benchmark with innovative complements and substitutes.

Patent Thickets

The patent thickets view considers patents as deterrents to innovation. This view differs substantially from the view that patents function as property rights that stimulate innovation. . . . The bargaining analysis presented here suggests that multiple patents should not be viewed as deterring innovation. Multiple inventors can coordinate with producers through market transactions. This means that by making licensing offers to producers and negotiating patent royalties, inventors and producers can achieve efficient outcomes. There is no need for government regulation to restrict the total number of patents. Arbitrarily limiting the total number of patents by various regulatory mechanisms would likely discourage invention and innovation.

Tragedy of the Anticommons

The “Tragedy of the Anticommons” describes the situation in which dispersed ownership of complementary inventions results in underuse of resources[.] . . . . The present analysis shows that patents need not create excess property rights when there is bargaining between IP owners and producers. Bargaining results in a total output that maximizes the joint returns of inventors and producers. Social welfare and final output are greater with bargaining than in Cournot’s posted prices model. This contradicts the “Tragedy of the Anticommons” result and shows that there need not be underutilization of resources due to high royalties.

I respect Alex Tabarrok immensely, but his recent post on the relationship between “patent strength” and innovation is, while pretty, pretty silly. The entirety of the post is the picture I have pasted here.

The problem is that neither Alex nor anyone else actually knows that this is “where we are,” nor exactly what the relationship between innovation and patent strength is — in large part because we don’t really know what the strength of patents is.

I love, for example, when the anti-patent crowd crows about patent thickets and their alleged disastrous consequences for complex devices like smartphones — often posted to Twitter from their smartphones.  The reality is that we have smartphones and innumerable other complex products besides.  Would we have more or better ones if the patent system were different?  Maybe – show me the data.  Defining the but-for world is notoriously difficult, and I’m not saying one can’t make principled arguments about the patent system without proving a negative.  But Alex’s graph and most comments by the patent haters imply a lot more precision about what we know than we actually have.  The relevant question is the marginal one, but a lot of the criticism of the patent system seems to me to take advantage of our uncertainty to imply that the benefits of weaker patents would be practically infinite.  You can criticize patents for making complex products more difficult to bring to market on the basis of basic economic logic, but when your analysis defines costs with little more rigor than Alex’s napkin contains, your policy footing should be vanishingly small.  Frankly, as Richard Epstein points out, Judge Posner’s recent foray into this debate, although longer, is equally short on evidence.

Meanwhile, Adam Mossoff has explored these issues in great detail, and with compelling evidence, in his paper on the sewing machine wars of the 1850s, and he draws a very different lesson.

In the modern world the evidence supporting dire claims is equally weak, although you wouldn’t know it from media coverage and academic discourse that gloms on to events like the recent Apple-Samsung trial as evidence that the new Dark Ages are upon us.  Between the software Alex used to make that graph, the computer on which that software was run, and the enormous range of other innovations that brought it from his mind to my digital doorstep, we seem to be managing to produce and use an enormous amount of innovation.  What value, exactly, does Alex think is contained in the innovation delta between his red dot and the top of the curve?  As Richard Epstein put it:

Nor is there any obvious global sign of patent malaise in the software industry. Last I looked, the level of technological improvement in the electronics and software industries has continued to impress. The rise of the iPad, the rapid growth of social media, the increased use of the once humble cell phone as a mobile platform for a dizzying array of applications—these do not point to industries in their death throes. It may well be the case that a better patent system could have seen more rapid growth in technology.

I think he meant to add something like: “but we don’t know that, and we sure don’t know how much ‘better’ and at what cost.”

Most important, the patent system just isn’t as “strong” as most critics would have you believe.  Our liability regime (especially post-eBay) injects enormous uncertainty into the process.  Enforcement costs are high.  And the patent system doesn’t exist in a vacuum.  Antitrust laws, tax laws, trade laws, financial regulation, consumer protection rules, layer upon layer of regulatory oversight, etc., etc., serve to weaken the “optimal” incentives to innovate that simplistic analyses of the patent system largely assume away.  Ideally, perhaps, we’d remove all that detritus and then follow the critics’ advice.  But that isn’t going to happen, and in the meantime the interactions among the various overlapping regulatory and legal rules — including the patent system itself, of course — are complex and poorly understood.  Perhaps we could do better, but it is by no means clear that further weakening patent rights will get us there.  And in the meantime, reports of innovation’s death seem like a bit of an exaggeration.

My colleague Adam Mossoff is blogging over at the Volokh Conspiracy on his fascinating paper, A Stitch in Time: The Rise and Fall of the Sewing Machine Patent Thicket. Here’s an excerpt from the first post:

The debate centers on whether patent thicket theory accurately explains or predicts such problems in practice, and the empirical studies produced thus far are arguably in equipoise. In speaking about anticommons theory, Professor Heller acknowledges that “the empirical studies that prove — or disprove — our theory remain inconclusive.” Nonetheless, in the patent literature and in the popular press, vivid anecdotes abound about patent thickets obstructing development of new drugs or preventing the distribution of life-enhancing genetically engineered foods to the developing world.

Given the heightened interest today amongst scholars and lawyers concerning the existence and policy significance of patent thickets, a historical analysis of the sewing machine patent thicket in the 1850s — called the “Sewing Machine War” at the time — and the denouement of this patent thicket in the Sewing Machine Combination of 1856 is important.

On one hand, it is an empirical case study of a patent thicket that (temporarily) prevented the commercial development of an important product of the Industrial Revolution. The sewing machine was the result of numerous incremental and complementary inventive contributions, which led to a morass of patent infringement litigation given overlapping patent claims to the final commercial product. The Sewing Machine War thus confirms that patent thickets exist, and that they can lead to what Professor Heller has identified as the tragedy of the anticommons.  On the other hand, the story of the sewing machine challenges some underlying assumptions in the current discourse about patent thickets.

The posts and the paper are highly recommended material for TOTM readers interested in issues involving IP and antitrust.