Archives For Anne Layne-Farrar

Too much ink has been spilled in an attempt to gin up antitrust controversies regarding efforts by holders of “standard essential patents” (SEPs, patents covering technologies that are adopted as part of technical standards relied upon by manufacturers) to obtain reasonable returns to their property. Antitrust theories typically revolve around claims that SEP owners engage in monopolistic “hold-up” when they threaten injunctions or seek “excessive” royalties (or other “improperly onerous” terms) from potential licensees in patent licensing negotiations, in violation of pledges (sometimes imposed by standard-setting organizations) to license on “fair, reasonable, and non-discriminatory” (FRAND) terms. As Professors Joshua Wright and Douglas Ginsburg, among others, have explained, contract law, tort law, and patent law are far better placed to handle “FRAND-related” SEP disputes than antitrust law. Adding antitrust to the litigation mix generates unnecessary costs and inefficiently devalues legitimate private property rights.

Concerns by antitrust mavens that other areas of law are insufficient to cope adequately with SEP-FRAND disputes are misplaced. A fascinating draft law review article by Koren Wong-Ervin, Director of the Scalia Law School’s Global Antitrust Institute, and Anne Layne-Farrar, Vice President of Charles River Associates, does an admirable job of summarizing key decisions by U.S. and foreign courts involved in determining FRAND rates in SEP litigation, and of highlighting the key economic concepts underlying these holdings. As explained in the article’s abstract:

In the last several years, courts around the world, including in China, the European Union, India, and the United States, have ruled on appropriate methodologies for calculating either a reasonable royalty rate or reasonable royalty damages on standard-essential patents (SEPs) upon which a patent holder has made an assurance to license on fair, reasonable and nondiscriminatory (FRAND) terms. Included in these decisions are determinations about patent holdup, licensee holdout, the seeking of injunctive relief, royalty stacking, the incremental value rule, reliance on comparable licenses, the appropriate revenue base for royalty calculations, and the use of worldwide portfolio licensing. This article provides an economic and comparative analysis of the case law to date, including the landmark 2013 FRAND-royalty determination issued by the Shenzhen Intermediate People’s Court (and affirmed by the Guangdong Province High People’s Court) in Huawei v. InterDigital; numerous U.S. district court decisions; recent seminal decisions from the United States Court of Appeals for the Federal Circuit in Ericsson v. D-Link and CSIRO v. Cisco; the six recent decisions involving Ericsson issued by the Delhi High Court; the European Court of Justice decision in Huawei v. ZTE; and numerous post-Huawei v. ZTE decisions by European Union member states. While this article focuses on court decisions, discussions of the various agency decisions from around the world are also included throughout.

To whet the reader’s appetite, key economic policy and factual “takeaways” from the article, which are reflected implicitly in a variety of U.S. and foreign judicial holdings, are as follows:

  • Holdup of any form requires lock-in, i.e., standard-implementing companies with asset-specific investments locked in to the technologies defining the standard or SEP holders locked in to licensing in the context of a standard because of standard-specific research and development (R&D) leading to standard-specific patented technologies.
  • Lock-in is a necessary condition for holdup, but it is not sufficient. For holdup in any guise to actually occur, there also must be an exploitative action taken by the relevant party once lock-in has happened. As a result, the mere fact that a license agreement was signed after a patent was included in a standard is not enough to establish that the patent holder is practicing holdup—there must also be evidence that the SEP holder took advantage of the licensee’s lock-in, for example by charging supra-FRAND royalties that it could not otherwise have charged but for the lock-in.
  • Despite coming after a particular standard is published, the vast majority of SEP licenses are concluded in arm’s length, bilateral negotiations with no allegations of holdup or opportunistic behavior. This follows because market mechanisms impose a number of constraints that militate against acting on the opportunity for holdup.
  • In order to support holdup claims, an expert must establish that the terms and conditions in an SEP licensing agreement generate payments that exceed the value conveyed by the patented technology to the licensee that signed the agreement.
  • The threat of seeking injunctive relief, on its own, cannot lead to holdup unless that threat is both credible and actionable. Indeed, the in terrorem effect of filing for an injunction depends on the likelihood of its being granted. Empirical evidence shows a significant decline in the number of injunctions sought as well as in the actual rate of injunctions granted in the United States following the Supreme Court’s 2006 decision in eBay v. MercExchange LLC, which ended the prior nearly automatic granting of injunctions to patentees and instead required courts to apply a traditional four-part equitable test for granting injunctive relief.
  • The Federal Circuit has recognized that an SEP holder’s ability to seek injunctive relief is an important safeguard to help prevent potential licensee holdout, whereby an SEP infringer unilaterally refuses a FRAND royalty or unreasonably delays negotiations to the same effect.
  • Related to the previous point, seeking an injunction against a licensee who is delaying or not negotiating in good faith need not actually result in an injunction. The fact that a court finds a licensee is holding out and/or not engaging in good faith licensing discussions can be enough to spur a license agreement as opposed to a permanent injunction.
  • FRAND rates should reflect the value of the SEPs at issue, so it makes no economic sense to estimate an aggregate rate for a standard by assuming that all SEP holders would charge the same rate as the one being challenged in the current lawsuit.
  • Moreover, as the U.S. Court of Appeals for the Federal Circuit has held, allegations of “royalty stacking” – the allegedly “excessive” aggregate burden of high licensing fees stemming from multiple patents that cover a single product – should be backed by case-specific evidence.
  • Most importantly, when a judicial FRAND assessment is focused on the value that the SEP portfolio at issue has contributed to the standard and products embodying the standard, the resulting rates and terms will necessarily avoid both patent holdup and royalty stacking.
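The stacking point can be made concrete with a bit of arithmetic. Below is a hypothetical sketch (all figures are illustrative and assumed for this example, not drawn from any case) of what happens when a single challenged rate is extrapolated to every SEP holder:

```python
# Hypothetical illustration of the royalty-stacking fallacy: assuming every
# SEP holder would charge the same per-portfolio rate as the one at issue.
device_price = 400.00   # hypothetical handset price, in dollars
challenged_rate = 0.02  # hypothetical 2% rate sought by the SEP holder in suit
num_sep_holders = 100   # hypothetical number of firms holding SEPs on the standard

# Naive aggregate: extrapolate the single challenged rate to all holders.
naive_aggregate_rate = challenged_rate * num_sep_holders
naive_aggregate_royalty = naive_aggregate_rate * device_price

print(f"Naive aggregate rate: {naive_aggregate_rate:.0%}")        # 200%
print(f"Implied royalty burden: ${naive_aggregate_royalty:.2f}")  # $800.00

# The implied burden is double the device price itself -- an implausible
# outcome, which illustrates why courts demand case-specific evidence of
# stacking rather than this kind of extrapolation.
```

The absurd result (a royalty burden exceeding the price of the product) is the economic intuition behind the Federal Circuit’s insistence on record evidence of actual stacking.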

In sum, the Wong-Ervin and Layne-Farrar article highlights economic insights that are reflected in the sounder judicial opinions dealing with the determination of FRAND royalties.  The article points the way toward methodologies that provide SEP holders sufficient returns on their intellectual property to reward innovation and maintain incentives to invest in technologies that enhance the value of standards.  Read it and learn.

TOTM is pleased to welcome guest blogger Nicolas Petit, Professor of Law & Economics at the University of Liege, Belgium.

Nicolas has also recently been named a (non-resident) Senior Scholar at ICLE (joining Joshua Wright, Joanna Shepherd, and Julian Morris).

Nicolas is also (as of March 2017) a Research Professor at the University of South Australia, co-director of the Liege Competition & Innovation Institute and director of the LL.M. program in EU Competition and Intellectual Property Law. He is also a part-time advisor to the Belgian competition authority.

Nicolas is a prolific scholar specializing in competition policy, IP law, and technology regulation. Nicolas Petit is the co-author (with Damien Geradin and Anne Layne-Farrar) of EU Competition Law and Economics (Oxford University Press, 2012) and the author of Droit européen de la concurrence (Domat Montchrestien, 2013), a monograph that was awarded the prize for the best law book of the year at the Constitutional Court in France.

One of his most recent papers, Significant Impediment to Industry Innovation: A Novel Theory of Harm in EU Merger Control?, was recently published as an ICLE Competition Research Program White Paper. His scholarship is available on SSRN and he tweets at @CompetitionProf.

Welcome, Nicolas!

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


A colleague sent along the 2011 Washington & Lee law journal rankings.  As co-editor of the Supreme Court Economic Review (along with Todd Zywicki and Ilya Somin) I was very pleased to notice how well the SCER is faring by these measures.  While these rankings should always be taken with a grain of salt or two, by “Impact Factor” here are the top 3 law journals in the “economics” sub-specialty:

  1. Supreme Court Economic Review (1.46)
  2. Journal of Legal Studies (1.31)
  3. Journal of Empirical Legal Studies (1.2)

SCER comes in third in the “Combined” rankings behind Journal of Empirical Legal Studies and the Journal of Legal Studies.

SCER is a peer-reviewed journal and operates on an exclusive submission basis.  You can take a look at our most recent volume here.  If you have an interesting law & economics piece (hint: it need not be related to a Supreme Court case) you’d like to submit, please consider us.

Submissions can be emailed to: scer@gmu.edu

UPDATE: I should also note that George Mason’s Journal of Law, Economics and Policy also ranks very well by these measures!  It is a student-run journal here at GMU Law and comes in 13th and 16th in the “economics” category by impact factor and combined ranking, respectively.

Speaking of JLEP ….

JLEP will be hosting a great symposium in conjunction with GMU’s Information Economy Project (directed by Tom Hazlett) on Friday: The Digital Inventor: How Entrepreneurs Compete on Platforms.   I have the privilege of moderating one of the panels.  But the lineup of speakers is just terrific.

  • Richard Langlois, University of Connecticut, Department of Economics 
  • Thomas Hazlett, Prof. of Law & Economics, George Mason University
  • Andrei Hagiu, Harvard Business School, Multi-Sided Platforms
  • Salil Mehra, Temple University Beasley School of Law, Platforms and the Choice of Models
  • Donald Rosenberg, Qualcomm, Inc.
  • Anne Layne-Farrar, Compass-Lexecon, The Brothers Grimm Book of Business Models: A Survey of Literature and Developments in Patent Acquisition and Litigation
  • James Bessen, Boston University School of Law, The Private Costs of Patent Litigation
  • David Teece, Haas School of Business, UC Berkeley

Our book, Competition Policy and Patent Law Under Uncertainty: Regulating Innovation will be published by Cambridge University Press in July.  The book’s page on the CUP website is here.

I just looked at the site to check on the publication date and I was delighted to see the advance reviews of the book.  They are pretty incredible, and we’re honored to have such impressive scholars, among the very top in our field and among our most significant influences, saying such nice things about the book:

After a century of exponential growth in innovation, we have reached an era of serious doubts about the sustainability of the trend. Manne and Wright have put together a first-rate collection of essays addressing two of the important policy levers – competition law and patent law – that society can pull to stimulate or retard technological progress. Anyone interested in the future of innovation should read it.

Daniel A. Crane, University of Michigan

Here, in one volume, is a collection of papers by outstanding scholars who offer readers insightful new discussions of a wide variety of patent policy problems and puzzles. If you seek fresh, bright thoughts on these matters, this is your source.

Harold Demsetz, University of California, Los Angeles

This volume is an essential compendium of the best current thinking on a range of intersecting subjects – antitrust and patent law, dynamic versus static competition analysis, incentives for innovation, and the importance of humility in the formulation of policies concerning these subjects, about which all but first principles are uncertain and disputed. The essays originate in two conferences organized by the editors, who attracted the leading scholars in their respective fields to make contributions; the result is that rara avis, a contributed volume more valuable even than the sum of its considerable parts.

Douglas H. Ginsburg, Judge, US Court of Appeals, Washington, DC

Competition Policy and Patent Law under Uncertainty is a splendid collection of essays edited by two top scholars of competition policy and intellectual property. The contributions come from many of the world’s leading experts in patent law, competition policy, and industrial economics. This anthology takes on a broad range of topics in a comprehensive and even-handed way, including the political economy of patents, the patent process, and patent law as a system of property rights. It also includes excellent essays on post-issuance patent practices, the types of practices that might be deemed anticompetitive, the appropriate role of antitrust law, and even network effects and some legal history. This volume is a must-read for every serious scholar of patent and antitrust law. I cannot think of another book that offers this broad and rich a view of its subject.

Herbert Hovenkamp, University of Iowa

With these contributors:

Robert Cooter, Richard A. Epstein, Stan J. Liebowitz, Stephen E. Margolis, Daniel F. Spulber, Marco Iansiti, Greg Richards, David Teece, Joshua D. Wright, Keith N. Hylton, Haizhen Lee, Vincenzo Denicolò, Luigi Alberto Franzoni, Mark Lemley, Douglas G. Lichtman, Michael Meurer, Adam Mossoff, Henry Smith, F. Scott Kieff, Anne Layne-Farrar, Gerard Llobet, Jorge Padilla, Damien Geradin and Bruce H. Kobayashi

I would have said the book was self-recommending.  But I’ll take these recommendations any day.

(NB:  We have consulted with Visa U.S.A. Inc. on a variety of issues; the views expressed herein are our own.)

In our earlier post, we observed that the GAO report on interchange got off on the wrong foot when it concluded that interchange fees were rising.  We infer from the silence which greeted our post that everyone agrees with this criticism.  Indeed, yesterday’s posts and comments appear to agree that the GAO’s report does very little to advance the discussion of interchange or the cost of electronic payment.  But we suspect that greater disagreement lies just around the corner.

A number of posts yesterday promised to address the claim on which the criticism of modern payment systems rests–the extent to which the discount that merchants pay to accept most electronic payment systems in the U.S. imposes a tax on legacy payment instruments such as cash and check.  Mark Seecof seized upon this point in his comment on Ron Mann’s post.  According to Mark, the “big problem” with the payment card industry is that discount fees are used to fund rewards programs, and he claims that society as a whole would be better off if the government simply forbade networks from enforcing their honor-all-cards rules and forced them to negotiate acceptance on a program-by-program, issuer-by-issuer basis.  The claim that increasing transaction costs will produce more efficient outcomes is a curious one.  But we’ll save that issue for a later day.  Rather, with this post we intend to take up the predicate of Mark’s post–that discount fees on electronic payments shift costs to users of legacy payment instruments.

At the outset, we note that discount fees, unlike interchange, are a feature of virtually all private payment instruments.  Thus, if there is something to the notion that discount fees tax other forms of payment, then the criticism applies as much to American Express and Discover as it does to MasterCard and Visa.  In our view, however, although this criticism is oft repeated, repetition obscures a number of problems.

First, cross-subsidies are ubiquitous in any complex economy.  Consumers receive free refills on drinks in restaurants, free parking at shopping malls, goods below cost in supermarkets (via loss leaders), relatively inexpensive newspapers because advertisers pay most of the costs, and many similar benefits.  To bring buyers and sellers together through such intermediaries as newspapers, supermarkets, and credit cards, one side frequently receives inducements to participate.  These inducements help maximize the joint value of the ultimate transaction for the parties.  Rather than an inefficient “subsidy,” these inducements are the lubricant necessary to make the economic machine work at its best.

Second, from a social policy perspective, whether interchange forces legacy buyers to pay more should raise a concern only if legacy is more efficient.  But it’s not.  It is hard to believe, as some people suggest, that credit cards and other electronic payments are more expensive than cash and checks.  In fact, legacy payments have several limitations that create costs for both consumers and merchants.  Cash only works well when the good and payment are exchanged simultaneously.  And the technology of cash does not support the instantaneous decision to give credit to the person buying; rather, the buyer has to arrange credit separately with a financial institution.  In addition, with cash, you have no recourse against fraud, other than bringing suit.  (On the costs of cash, see Daniel D. Garcia-Swartz, Robert W. Hahn, and Anne Layne-Farrar, The Move Toward a Cashless Society: A Closer Look at Payment Instrument Economics, Vol. 5, Issue 2, Review of Network Economics 175, 192 (June 2006) (describing cash as “among the most costly payment method[s] for society”)).  Similar transaction costs and risks accompany the use of checks.

By contrast, there are numerous benefits to using credit cards and other electronic payments.  For consumers, these benefits include the real-time extension of credit, the reduction of risk, automated dispute resolution, and better record keeping, as Garcia-Swartz, Hahn and Layne-Farrar demonstrate.  For merchants, accepting credit cards allows them to make sales on credit at a generally lower cost than operating their own credit program.  And merchants can receive faster and more certain payment from customers using cards than from customers using other means, such as checks.

Third, and most significantly, the rapid growth of online payments within the retail industry is rendering legacy payments obsolete.  The GAO itself seems generally aware of the shift from legacy to electronic payments.  Indeed, it cites the Federal Reserve’s recent estimate that the use of both checks and cash has declined, or at least grown more slowly than credit and debit card use, since the mid-1990s as more consumers switched to electronic forms of payment.  Since 2005, more than half of total retail transactions have used either credit or debit cards.  Large national merchants report that sales made with cash and checks have decreased in recent years, while sales made with credit and debit cards have increased.

But the GAO ignores that the argument in favor of regulating interchange to protect cash and check users, whatever its overall merits, is logically limited to areas where legacy forms of payment can be used.  And those areas are quickly vanishing.  For example, even if a study of the 1980s retail gasoline market were to suggest the existence of cross-subsidies then—a debatable conclusion—that study cannot be read to suggest the existence of such a subsidy among people who shop on-line or who buy their gas at automated fuel dispensers.  According to scholars, we are rapidly moving towards a cashless society where no one uses legacy instruments such as cash or check.  And there is simply no reason to believe that this move is driven by the inherent inefficiency of electronic payment or an implicit tax on users of cash or check.

The GAO’s misplaced concern for users of legacy payments is yet another example of a theoretical objection to unregulated interchange that finds no support in the facts.  According to Garcia-Swartz, Hahn, and Layne-Farrar, “the shift toward a cashless society appears to improve economic welfare,” with consumers the party most likely to benefit.  But, if history repeats itself, we will likely have to continue to endure the interchange debate regardless of the facts.  As the GAO’s report again reminds us, merchants simply want to pay less for the benefits that credit cards provide.  Not surprisingly, they are in this debate for themselves, not their consumers.

First, I want to join the rest of the participants in congratulating Professor Carrier on an excellent and well-written book emerging out of a thoughtful and ambitious project. The project, and the book, are provocative, important contributions to the literature, and usefully synthesize many of the most important debates in both antitrust and intellectual property.

Were this a full book review and not merely a blog post, I would spend more time identifying the many points in the book that I agree with. But it is not. Instead, I will narrow my focus to Professor Carrier’s approach to standard setting activities, and in particular, patent holdup. Chapter 14 is largely devoted to summarizing the state of affairs in antitrust and standard setting. The summary (pages 323-342) is balanced, well-written and recommended reading for anyone interested in getting up to speed on the current policy issues. After summarizing Professor Carrier’s proposal for antitrust analysis of patent holdup (and other business conduct in the standard setting process), I’ll turn to highlighting a few areas where I found myself either disagreeing with his analysis or hoping for a more complete treatment.
In my own view, the two most pressing policy issues with respect to patent holdup are:

1. What is the appropriate role of antitrust in governing patent holdup?

2. If antitrust rules should govern patent holdup, which statute(s) and what type of analysis should apply? In particular, what is the appropriate scope of Section 2 of the Sherman Act and Section 5 of the FTC Act?

While Professor Carrier’s treatment of patent holdup usefully summarizes the debate, and also recommends a policy proposal that I largely agree with, I was left hoping for a bit more in this section of the book in terms of moving the ball forward on these important questions.

Let’s begin with the policy proposal itself. Professor Carrier argues that “given SSOs’ significant pro-competitive justifications, courts and the antitrust agencies should consider their activity under the Rule of Reason.” Carrier carves out standard setting organization (SSO) members’ joint decisions to fix prices on the final goods sold to consumers as the only conduct deserving of per se treatment. So far I’m on board. It makes economic and legal sense to treat both standard setting activities (with the exception of cartel behavior) and the IP rules of SSOs as generally procompetitive and thus falling under the rule of reason. Carrier identifies three potential areas of liability concern under the rule of reason: patent holdup (he cites Dell and Unocal as examples), boycotts, and situations in which SSOs exert buyer power to reduce prices with the effect of reducing the incentive to innovate. Carrier writes that “absent these situations, SSO activity should be upheld under the rule of reason.”

There is much I agree with here. In fact, I find myself in agreement with Professor Carrier about most of what he writes about the limited utility of per se analysis in the standard setting arena. But I will focus on some areas where I suspect that we disagree, though I’m left unsure based solely on what is in the book. Carrier identifies patent holdup involving deception as a cause for concern under a rule of reason analysis. But the treatment is cursory. Carrier writes that “such activity could demonstrate attempted monopolization under Section 2 of the Sherman Act” and notes that a plaintiff making such a claim must demonstrate, amongst other requirements, that “the deception result[ed] in a standard’s adoption or higher royalties.” (page 342).

It is helpful for my purposes to divide the world of patent holdup theories into those involving deception (the stylized facts in Rambus or the allegations in Broadcom v. Qualcomm) and those that do not, instead involving only the ex post modification and/or breach of contractual commitments made in good faith in the standard setting process (FTC v. N-Data). Again, with respect to each of these patent holdup theories, there are at least two critically important policy issues:

1. What is the appropriate role of antitrust in governing patent holdup?

2. If antitrust rules should govern patent holdup, which statute(s) and what type of analysis should apply? In particular, what is the appropriate scope of Section 2 of the Sherman Act and Section 5 of the FTC Act?

With respect to the first policy question, Carrier appears to presume that antitrust rules should apply to unilateral conduct in the form of patent holdup involving both deception and breach theories. I may be wrong about the breach theories. While Carrier discusses N-Data briefly, his policy proposal singles out examples such as Dell and Unocal, which involved deception. I was left wanting a clearer exposition of the details of the policy proposal in this section. More fundamentally, the relative merits of state contract law and the patent doctrine of equitable estoppel in the SSO setting as alternatives to antitrust liability are an important topic. Of course, this issue is one of special concern for me, since Kobayashi and Wright (Federalism, Substantive Preemption, and Limits on Antitrust) have argued that antitrust rules layered on top of these alternative (and, we argue, superior) regulatory institutions threaten to chill participation in the SSO process and reduce welfare. But Kobayashi and Wright are not alone in questioning the utility of antitrust liability layered on top of these alternative bodies of state and federal law. For example, Froeb and Ganglmair present a model in which “the threat of antitrust liability on top of simple contracts shifts bargaining rents from creators to users of intellectual property in an inefficient way.” Other contributors to the literature questioning the role of antitrust liability in “breach”-style patent holdup cases such as N-Data include Anne Layne-Farrar. I will not take on the task of repeating the various arguments against antitrust liability in this blog post. But I believe that Carrier’s standard setting chapter and policy proposals would benefit from addressing them.

Second, assuming that antitrust rules should apply to patent holdup (both deception and breach variants), what should the analysis look like? With respect to the Section 2 analysis of claims involving deception, Professor Carrier appears to endorse the proposition that a demonstration of either actual exclusion (e.g., the deception is the but-for cause of the adoption of the technology) or higher royalties would be sufficient to support such a claim. I’m not sure why the latter is or should be sufficient. As I’ve argued elsewhere, the Supreme Court’s decision in NYNEX applies in the patent holdup setting when (1) the patent holder has market power prior to the deception and (2) the deceptive conduct results in higher royalties but not exclusion of rival technologies. When those conditions are satisfied, NYNEX holds (consistent with much of the Supreme Court’s general jurisprudence on the monopolist’s freedom to engage in optimal pricing, e.g., Trinko, Linkline) that deceptive or fraudulent conduct that merely results in higher prices, but not exclusion, cannot be the basis of a Section 2 claim. Along those lines, I’ve argued that the D.C. Circuit’s Rambus decision is best interpreted as taking the Commission to task for failing to meet its burden of demonstrating that the first of these conditions did not apply. In any event, reading Carrier’s treatment of patent holdup issues left me with several questions. For instance, does he believe that Section 2 should apply to both the deception and breach variants of patent holdup? If it applies to both, what is the appropriate scope of NYNEX? For example, do plaintiffs in patent holdup claims under Section 2 have the burden of demonstrating that the patent holder did not have monopoly power prior to the deceptive conduct? If not, on what grounds is NYNEX distinguishable? Is it because it was not an SSO case?
What is the appropriate rule of reason analysis in a case involving deception in the standard setting process? And what about cases like N-Data, where the plaintiff does not allege any “bad conduct” at the time the technology is selected for the standard, but rather a renegotiation of contract terms at a later time?

Third, no discussion of patent holdup would be complete without a discussion of whether and how Section 5 of the FTC Act should apply to patent holdup theories. Here again, while Carrier discusses N-Data briefly, this question does not receive attention. So the blog symposium seems like a great place to ask questions like the following: Should Section 5 of the FTC Act apply to both the deception-based and the “pure breach” variants of patent holdup? These are some of the most pressing issues relating to antitrust analysis of standard setting. Recently, Chairman Leibowitz singled out N-Data as a paradigmatic example of the appropriate application of Section 5:

One category of potential cases [to which to apply Section 5] involves standard-setting. N-Data, our consent from last spring, is a useful example. Reasonable people can disagree over whether N-Data violated the Sherman Act because it was never clear whether N-Data’s alleged bad conduct actually caused its monopoly power. However, it was clear to the majority of the Commission that reneging on a commitment was not acceptable business behavior and that—at least in this context—it would harm American consumers. It does not require a complex analysis to see that such behavior could seriously undermine standard-setting, which is generally procompetitive, and dangerously limit the benefits that consumers now get from the wide adoption of industry standards for new technologies.

“Tales from the Crypt” Episode ’08 and ’09: The Return of Section 5 (“Unfair Methods of Competition in Commerce are Hereby Declared Unlawful”).
Similarly, Commissioners Leibowitz, Rosch, and Harbour noted in the N-Data majority statement that “there is little doubt that N-Data’s conduct constitutes an unfair method of competition,” describing the renegotiation of the ex ante contractual commitment to license at $1,000 to a RAND commitment as “oppressive” and an act that threatens to “stall [the standard setting process] to the detriment of all consumers.”

I wonder whether Professor Carrier thinks the majority in N-Data was correct, and if so, on what basis. Or are breach variant holdup claims more appropriately governed under Section 2? If the answer to either of those questions is yes, I’d like to know whether, and on what basis, applying these mandatory antitrust rules is superior to contract law, which contains doctrines designed to identify and distinguish good faith modifications and renegotiations from attempts at ex post opportunism.

I should note that I do not consider it a criticism of the book that these details are largely left out. The task of organizing a coherent and intellectually provocative book that moves among copyright, patent, and antitrust is monumental and comes with its own special set of breadth and depth tradeoffs. Still, I ultimately found the attention to legal, economic, and policy details in the SSO section less satisfying than the treatment of other equally complex issues elsewhere in the book. And while I was disappointed that those details were not there, having read Professor Carrier’s views on innovation and antitrust more generally, I am very curious to see how he will manage the thorny details of the patent holdup context.