Archives For Innovation for the 21st Century Symposium

 First of all, I would like to express my deepest gratitude to Josh Wright. Only because of Josh’s creativity and tireless, flawless execution did this blog symposium come about and run so smoothly. I also would like to thank Dennis Crouch, who has generously cross-posted the symposium at PatentlyO. And I am grateful for the attention of the communities at TOTM and PatentlyO, which have patiently scrolled through countless pages and posts to learn about my book.

Finally, I would like to thank Dan Crane, Dennis Crouch, Brett Frischmann, Scott Kieff, Geoff Manne, Phil Weiser, and Josh Wright for their insightful and incisive comments. Though they each had busy schedules, they managed to squeeze in a look at some or all of a book that is not the shortest ever written. And wasting no time, they focused like a laser on the book’s most ambitious proposals, as well as its omissions. If I didn’t know better, I would think that the commentators divided the market of my book to minimize overlap in treatment. I do know better, though, enough to know that the breadth of critiques and lack of overlap reflect Josh’s skill in putting together such a diverse and impressive group of commentators.

Without further ado, let me address the comments by substantive area, starting with antitrust law, proceeding through patent and copyright law, and concluding with the most general critiques.


This post is from F. Scott Kieff (Wash U./ Hoover)

I, too, join the rest of the participants in congratulating Michael Carrier on this great book about this great topic.  I have enjoyed reading Michael’s work in the past and I enjoyed meeting him at a conference last year.  He is a wonderfully warm, bright, and engaging person.  Although I wish that I had more of an opportunity to fully read his impressive text before the date of this on-line symposium, I am grateful for the opportunity to read a great deal of the book and to at least skim the remainder.  The wonderful conference that Damien Geradin and his colleagues hosted on these same issues in Amsterdam these past few days was a pleasant distraction.  (For Damien’s conference click here).

Because I share everyone’s support for Michael and his new book, as already detailed by others, I will focus my contribution here on some ways in which the book might have achieved a greater impact.  Recognizing that every project could be improved in some ways, and that ultimately the author must make difficult choices about completeness and clarity, about his own voice and message, and so on, I offer my comments on the chance that those who read Michael’s great book might wonder whether different approaches to the ideas he explores have existed, or still exist.

As it turns out, the interface between patents and antitrust was one of the two central motivators behind the present US patent statutes, which were codified as the 1952 Patent Act.  In fact, one of the two principal drafters of the ’52 Act, Giles Rich, wrote a series of five articles in the 1940s that bear a title not unlikely to show up in a computer search on this topic.  (The other principal drafter, who also wrote a great deal about the statute, was Pat Federico.)  And while the 1940s were indeed a long time ago, because Giles Rich went on to be the longest sitting federal judge, the world’s most famous patent scholar and jurist, the widely recognized father of the modern American Patent System, and a judge on the court that hears most patent appeals, these papers were conveniently republished in a 2004-2005 volume of the Federal Circuit Bar Journal.  The citation is:  Giles S. Rich, “The Relation Between Patent Practices and the Anti-Monopoly Laws,” 14 Fed. Circuit B.J., at pages 5, 21, 37, 67, and 87 (2004-2005).  (The other articles by Judge Rich that are republished in that volume also are instructive on the points explored in Michael’s book.)

Judge Rich explored an approach that is focused on predictable validity and enforcement rules rather than the more flexible approaches advocated by Michael (and many others).  Rich was not alone.  His approach was followed in the writings of a diverse group of leading commercial jurists of the time, such as Learned Hand and Jerome Frank.  (It is worth noting, for reasons explored below, that if we use modern political labels, Judge Frank would be seen as a liberal populist.)

The judiciary was not the only branch of government to follow Rich’s view.  Rich provided extensive explicit testimony before Congress about the goals of the ’52 Act in re-aligning the interface between patents and antitrust and in creating an objective standard for determining patent validity.   Congress agreed with the approach he offered in his testimony when it voted for the statute.  The Supreme Court in turn expressly and extensively relied on that legislative history, and especially Judge Rich’s testimony, in the well-known Dawson decision on patents and antitrust in 1980.  That approach was also affirmed by the current Supreme Court in the ITW v. III case. 

As Judge Pauline Newman has reminded us on several occasions in law review articles and speeches, we can fast forward to the late 1970s, when the economy was in difficult times, as it was in the 1940s and is today, to see that a very diverse pair of US Presidents also decided to adopt an approach to patents like that urged by Rich, Federico, Hand, Frank, and others.  President Carter decided, after a careful study, to put forth a statute designed to strengthen the patent system by creating the Federal Circuit, and President Reagan signed the bill after Congress passed it.

For the past several years, a number of academics have been writing about this approach to patents – an approach that might be seen as focused on the theory of property more generally (as compared with just intellectual property).  The group includes Richard Epstein, Steve Haber, Troy Paredes (now on leave from this academic work), Henry Smith, Joseph Straus, David Teece, Polk Wagner, Josh Wright, and me (these folks listed so far have worked together on a range of recent works arising out of the Hoover Project on Commercializing Innovation), as well as Michael Abramowicz, John Duffy, and Adam Mossoff.  While a recent posting on Patently-O labels one of the folks listed here (me) as “conservative,” it is not clear what is meant by that term.  If the term is given its normal modern political meaning, then it is curious to note that Charles Burson, Al Gore’s former Chief of Staff, co-authored one of the recent opinion pieces I helped put together on patent reform, since it is not clear that he would fit that definition of the term.  Then again, this is an approach also advocated by President Carter and Jerome Frank, who also don’t easily fit the modern political use of the term conservative.  Put differently, the issues don’t break down nicely along mainstream political lines.  Nor do most people, for that matter.  Nor do folks break down along lines of being pro-patent or anti-patent.  These issues are more complex.  And so is any good academic.

The most direct reason why it makes sense to go through all of this intellectual history, naming all of these names of folks who have written about the topics Michael explores (but in a different way than he does), is that Michael’s book does not seriously address any of them or their work.  Indeed, Michael has confirmed that his book doesn’t cite to or even mention most of these names or their work.  And the few times when he does mention some of them, it is in a very minor way, for propositions that are uncontroversial and different from the potential areas of debate they would have with him.  Two notable exceptions, which I appreciate, are Joseph Straus and me.  Michael mentions me once in a short catalog of different approaches to patent theory.  And while he does mention one or two of Joseph’s pieces discussing the lack of evidence of a patent hold up problem in the European biotechnology setting, and while Michael seems to conclude in that section of his book that patents have posed less of a problem for basic science than some might have feared, he still concludes that “A few high-profile lawsuits against researchers would knock out the scaffolding currently supporting this precarious state of affairs.”  What is so precarious about this state of affairs, and why would a few lawsuits disrupt it?  A few airliners crash once in a while, and yet the airline sector still does business, and people who elect for safety reasons to drive rather than take commercial flights are generally not seen as acting in a sufficiently rational way to guide prudent policy on the issue.  Rather than watching Michael sit as a seemingly neutral judge weighing the empirical evidence he elects to discuss in this part of the book, a reader might want to know more about the reasons why patent hold up in this area is not a big issue (and why an “experimental” or “fair” use exception may be), and the book would have made a greater impact in this area if it had addressed more of that work.

The bottom line is that while Michael has good reasons for not engaging the body of work discussed here, readers might like to at least know about the work, as well as the history, so that they can make up their own minds about these issues after due consideration of the range of views.  For those who are interested, much of it is available for free download on the web at www.innovation.hoover.org. 

A more indirect reason why it matters to consider these other views is that many of them apply a form of comparative institutional analysis generally associated with the field of New Institutional Economics.  In addition to taking seriously the transaction cost problems of property rights that underlie a big part of Michael’s analysis, this comparative approach also takes seriously the political economy problems that underlie how government actors will apply different decision-making rules.  Application of this comparative analytical framework highlights some of the complexities of the more flexible approaches Michael recommends in his book. 

For example, when it comes to dealing with the problem of bad patents (and there are many such patents – ones that don’t really meet the requirements for validity but have nonetheless been issued by the PTO), Michael endorses the currently popular proposals for more flexible approaches to weeding them out.  These proposals generally go by several names, including “second window,” “opposition,” “reexamination,” etc.  In his words:

“An added bonus of the proposal would be its effect on antitrust. By providing a low-cost avenue to remove invalid patents, it would reduce the incidence of market power”

But as economists love to say, there is no such thing as a free lunch.  Faster and less financially expensive proceedings for policing bad patents are not without their costs.  The way they go faster and burn fewer dollars per hour in attorney time is that they allow an official actor, whether in the PTO or the courts, the flexibility and discretion to deny patents based on a subjective report about what was within the skill of those in the prior art, rather than the objective and more-fact-based inquiry into the contents and existence of actual laboratory notebooks, printed publications, and sample products which has been the rule since the 1952 Act. 

Flexibility sounds cool – who wants to be rigid? – but it has a significant Achilles heel.  Giving courts and examiners a pass from having to get the hard evidence that used to be required to prove invalidity over the prior art does not come without serious cost. Asking a decision maker to use her legal or technical expertise as the primary basis for her decision about what she thinks the state of the art was at a particular time in history gives her greater discretion than asking an ordinary jury whether a particular document or sample product existed at a particular time and what that document actually contains. By increasing the discretion of government bureaucrats, flexibility increases uncertainty rather than decreasing it, and it gives a built-in advantage to large companies with hefty lobbying and litigation budgets. That may be a big reason why some big firms want it, but what’s good for some big businesses is not always good for business overall.

Indeed, while much is made about the uncertainty of patents – it’s all the rage today – one of the central problems with many of the legal changes that Michael proposes is that these changes inject into the patent system a much greater uncertainty, and an uncertainty of a much more pernicious type.  Business can deal well with factual uncertainty – in fact many forms of business thrive on it (think options, futures, insurance, etc.) – but the one type of uncertainty that is particularly bad for business overall is the uncertainty caused by having the underlying legal rules of the game enforced as a function of fashion and politics. But this is what you get when the enforcement mechanism (the details of the particular framework of the legal institutional design) is a matter of flexible discretion.

And to take things back to where they started, we have already run this experiment in this country.  The relevant legal framework for adjudicating patentability before the 1952 Act was that courts were asked to determine whether a patented invention constituted an “invention.”  A bit of a tautology.  And very flexible. 

The drafters of the 1952 Act did not think that the words “obviousness” and “nonobviousness” were any clearer, on their face.  But they picked these words precisely because they wanted to jettison the interpretive baggage associated with the old legal framework and create a new body of case law that focused on more objective factors. 

History can sometimes offer us some good ideas; and while we often like to emphasize the importance of invention, our efforts to re-invent our legal thinking in this area without the benefit of that historical wisdom may not play out so well. 

This post is from Brett Frischmann (Loyola/ Cornell (Visiting))

I enjoyed reading Mike’s book very much. It provides an excellent primer on antitrust, IP, and innovation.  He synthesizes the legal and economic foundations, contours, and controversies in an accessible fashion. I applaud him for doing this because, frankly, it is tough to do given that the fields are quite technical and specialized.  The book really is appropriate for a general audience.  That said, I agree with some of the previous commentators that at times Mike oversimplifies some of the very heated debates he summarizes; given the breadth and complexity of the issues, I cannot imagine how he could avoid doing so.  Still, I think readers should recognize that the debates Mike wades into are incredibly contentious and considerably nuanced.  Nonetheless, the primer is excellent, and the rest of the book is quite provocative.  Ambitiously, Mike makes 10 specific proposals to improve the copyright, patent, and antitrust laws.  I’ll focus my comments on his discussion of dual use technologies and the Sony rule.

In his first chapter focused on proposals for copyright law, Mike discusses dual use technologies (e.g., telephone, cameras, radio, photocopier, VCR, computer, Internet, P2P file-sharing software, etc.).  He explains that dual use technologies are often a form of disruptive innovation that creates new markets, opportunities and even more innovation along with new risks to copyright owners’ rights to control reproduction and distribution.  Copyright law has struggled with technological change in general and dual use technologies in particular throughout its history.  Mike explains how secondary liability theories of contributory liability and vicarious liability can be employed by copyright owners to hold dual use technology vendors accountable for copyright infringement of technology users, and he also explains how the US Supreme Court’s decision in Sony Corporation of America v. Universal City Studios (“Sony”) erected a doctrinal shield to protect technology companies from contributory liability claims where their technologies are “widely used for legitimate, unobjectionable purposes, or even if merely capable of substantial noninfringing uses.”  He then talks about P2P file sharing technologies, P2P litigation, and different interpretations of and challenges to the Sony rule.

The basic issue:  What should be the secondary liability regime for dual-use technologies such as P2P file sharing?   Mike’s proposal is essentially to preserve the Sony shield in its broad form.  He prefers the more protective version—where the defendant is off the hook if the technology is merely capable of substantial noninfringing uses—to the less protective version—where the defendant is off the hook if the technology is widely used for legitimate, unobjectionable purposes.  He offers a few reasons, most important of which is that the broad, bright-line rule is easier for courts and practitioners to apply than rules that take into account the primacy of certain uses, the subjective intent of technology providers, and potential technological design options (e.g., whether a technology provider employed adequate technological precautions to limit infringement).

Generally, I am sympathetic to his approach (I wrote a brief essay on the topic here.)  My concern with his analysis is that he did not engage arguments made by Doug Lichtman and others concerning the potential benefits of crafting a rule that forces technology providers to implement cheap, easy technological fixes to deter or disable infringement or perhaps better enable copyright owners to detect infringers.  Mike touches on the arguments lightly in his discussion of Judge Posner’s decision in Aimster, but I would have liked to see a bit more.  The argument for a more nuanced rule that places some responsibility on technology providers is stronger in the context of dual use technologies that enable widespread copying and distribution—e.g., P2P file sharing technologies; the threat to copyright owners is arguably much greater and technological precautions implemented by technology providers may (or may not) be relatively cheap.

I enjoyed Mike’s discussion of asymmetries—innovation asymmetry, error-costs asymmetry, and litigation asymmetry.  He claims that innovation asymmetry occurs in dual use cases because courts tend to “systematically overemphasize the infringing uses and underappreciate the noninfringing uses.”  (p.128)  The reasons for this asymmetry are that the former are more readily observed and quantified while the latter are “less tangible, less obvious at the onset of a technology, and not advanced by an army of motivated advocates.”  (p.129)  The noninfringing uses are difficult to quantify and value.  As Mike puts it, “how do we put a dollar figure on the benefits of enhanced communication and interaction?”  Moreover, the noninfringing uses tend to develop over time in ways that are difficult to predict upfront. 

I agree with Mike’s observations about the innovation asymmetry and think that he is correct to emphasize how it leads to a systematic bias in how courts (and commentators) evaluate technologies and develop the rules to regulate technologies.  Of course, the asymmetry is not unique to the creativity-innovation dichotomy; it also exists when courts analyze different uses of a work protected by copyright.  That is, the quantification and valuation problems are more general than dual use technologies.  (I must note that I am still working my way through part of the book, and frankly, I hope to see this argument made elsewhere because I think the innovation asymmetry he highlights is pervasive.)  In fact, as I have argued elsewhere (see also here and here), this type of asymmetry provides a relatively strong argument in favor of the broad version of the Sony rule for some types of general purpose—multi-use—technologies.

As for error-costs asymmetry, his discussion is very brief, and I have my doubts about the utility of the error cost framework, because it does not seem to account for accuracy benefits very well (i.e., the framework focuses on the costs of false positives and false negatives but does not deal directly with the benefits of true positives and true negatives – some preliminary thoughts on this are here). Moreover, I think Mike is too quick to say that in the case of a type II error (where a court mistakenly upholds a technology), “Congress can always step in to compensate copyright holders” (really, Congress can? Is it that easy?), and that in the case of a type I error (where a court mistakenly holds a technology provider liable), a technology will be abandoned and “consumers will never know what they are missing”—is this always the case? The claims seem a bit strong.
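To put the worry in rough symbols – a minimal sketch, with notation that is purely illustrative rather than anything drawn from the book or the literature cited above – the error cost framework as usually deployed tallies only the expected costs of mistaken decisions:

\[
\text{Expected error cost} \;=\; p_{\mathrm{I}}\,C_{\mathrm{I}} \;+\; p_{\mathrm{II}}\,C_{\mathrm{II}},
\]

where \(p_{\mathrm{I}}\) and \(C_{\mathrm{I}}\) are the probability and cost of mistakenly holding a technology provider liable, and \(p_{\mathrm{II}}\) and \(C_{\mathrm{II}}\) are the probability and cost of mistakenly upholding a technology. The point about accuracy benefits is that a full comparison of legal rules would also credit correct outcomes, something closer to

\[
\text{Expected net welfare} \;=\; p_{\mathrm{TP}}\,B_{\mathrm{TP}} \;+\; p_{\mathrm{TN}}\,B_{\mathrm{TN}} \;-\; p_{\mathrm{I}}\,C_{\mathrm{I}} \;-\; p_{\mathrm{II}}\,C_{\mathrm{II}},
\]

so a rule that minimizes error costs alone need not be the rule that maximizes expected welfare.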

Mike’s discussion of litigation asymmetry is also important. He notes how the high litigation costs alone can stifle technological innovation and create substantial holdup problems. He gives a number of high profile examples to make his point.  Mike claims that the litigation asymmetry, along with the other asymmetries, “exert[s] a strong, though often hidden, pull in the evaluation of infringing and noninfringing uses.”  This is one of those claims that are quite difficult to prove empirically, but nonetheless, at least in my view, ring true.  In the end, Mike recommends a return to Sony, at least for P2P file sharing technologies.  While one might say it is too late to return to Sony, given Napster, Aimster, and especially Grokster, I think it is fair to say that Sony is alive and well, perhaps even in its broad form. 

In closing, I will note that Mike’s discussion of statutory damages in chapter 7 is probably the least controversial in the book.  Maybe I am wrong, but I suspect that most people would agree with Mike that “applying statutory damages to secondary infringers has startling, unjustifiable consequences [described well by Mike], which are not needed to carry out Congress’s purposes and which pose great peril for innovation.”  (p. 160)

This post is from Dennis Crouch (Missouri/PatentlyO)

I am enjoying Professor Carrier’s new book Innovation in the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law. I will focus my discussion here on patent issues discussed in Part III of the book.

As other commentaries have noted, the book is long on conclusions and proposals but somewhat short on justifications for the conclusions. In the words of Geoff Manne: “with what seems to me to be little support (and with only essentially-anecdotal empirical support), Carrier then chooses sides.” On the patent side, Carrier rather consistently chooses sides in favor of weaker patents.

Thank you Supreme Court: Like many academics, Carrier knows that patent law circa 2006 was in a bad state. The problems stem from the Federal Circuit and its “formalistic rules”; from “patent trolls [who] do not manufacture products and thus do not face patent infringement counterclaims, emboldening them to file lawsuits”; and from the PTO and its insufficient resources.  The pendulum had swung too far in favor of the patent applicant and litigious patent holder. In Carrier’s history, the Supreme Court at least partially saved the day by weakening patent rights in eBay (no automatic injunctive relief), KSR (easing obviousness rules), and MedImmune (greater access to declaratory judgment actions). Seeing the light, the Federal Circuit also rolled back the scourge of treble damages for willful infringement in a way that “promises to promote disclosure and innovation.” Because of the Supreme Court’s action, many of the proposals needed in 2006 “are no longer needed.” From an antitrust harm perspective, eBay and MedImmune are theoretically important because they help prevent potential holdups. We are left without any answer, however, as to whether it is worth the added litigation expense and reduced patent incentive in order to shadow-box with these mythical holdups. It is interesting that the best example that Carrier provides is the NTP Blackberry case, which RIM eventually settled for $600+ million. In that case, RIM had taken on the risk of a large settlement by declining early opportunities to settle. In addition, because of the competitive nature of the wireless market, there is no indication that the settlement raised prices or limited access in any way.

On KSR, my reading is that Carrier sees this case as benefiting patent quality – or at least the likelihood that issued patents are valid. Later, Carrier links the elimination of invalid patents with a pro-competitive benefit. (p.229). What I don’t understand is whether Carrier’s argument is special to invalid patents, or whether he is simply saying that the marketplace would be more competitive without patent rights.

Post-Grant Opposition: Chapter 9 is devoted to a new post-grant opposition proceeding layered over the existing reexamination and interference procedures. Carrier’s proposal closely parallels the proposals in the Patent Reform Act of 2009, and I agree with his rejection of the current alternatives: (1) it would be prohibitively expensive (and, I would argue, detrimental to innovation) to ensure that only valid patents issue on the first pass through the PTO; (2) challenging patents during litigation is expensive and financially risky; and (3) current reexamination proceedings are too limited in scope and procedure (and, I would argue, too slow).

I have a small problem with Carrier’s explanation of the benefits of his proposed system. He first indicates that stronger post-grant review will lower prices because competitors will less often need to spend money to design around a would-be invalid patent. Then, in the next breath, Carrier promises spillover technology benefits derived from money spent reviewing competitors’ patents for opposition. Of course, these two arguments are two sides of the same coin. If money spent designing around is wasteful, so is money spent reviewing the validity of patents. Likewise, if reviewing competitors’ patents leads to additional innovation, so will time spent designing around.

Carrier also notes the “antitrust benefit” that invalidated patents will no longer create any market power problems. Glaringly absent from the discussion is how the opposition proceedings would impact the innovation incentive – especially under the PTO’s current mantra favoring rejection.

Material Transfer Agreements: Carrier includes Chapter 12 on MTAs in the patent section as well. It is an important topic, although it is unclear why it fits in the patent section. The closest link is that many material transfer agreements include restrictions on public disclosure and a declaration of ownership of any future patent rights. MTAs are generally negotiated. A researcher typically wants access to some materials such as a stem cell line, seed line, or tissue. The owner of those physical items ordinarily demands some consideration from the researcher as an inducement for sharing.

Carrier’s problems with the current MTA approach appear to be threefold. First, some researchers are unwilling to pay the consideration and thus cannot access the materials. Second, the negotiation has high transaction costs – including delay. And, third, the public loses when the researchers are restricted or delayed from publishing. His solution: require all agencies receiving federal funding to agree to a standard universal MTA (the UBMTA). The proposal is nice, but we really don’t know its impact. Parties that care about non-standard terms would still do side deals — adding more complexity than before the rule. Alternatively, those parties may simply walk away because the terms are not acceptable — further limiting access to the materials.

Pricing: Finally, I have a word to say about Oxford University Press. The books are great, but they are entirely too expensive. The list price for this book is $65, while the Bessen and Meurer book from Princeton University Press was only $30. Authors, when you negotiate your book deal, work to make sure the book is affordable.

First, I want to join the rest of the participants in congratulating Professor Carrier on an excellent and well-written book emerging out of a thoughtful and ambitious project. The project, and the book, are provocative, important contributions to the literature, and usefully synthesize many of the most important debates in both antitrust and intellectual property.

Were this a full book review and not merely a blog post, I would spend more time identifying the many points in the book that I agree with. But it is not. Instead, I will narrow my focus to Professor Carrier’s approach to standard setting activities, and in particular, patent holdup. Chapter 14 is largely devoted to summarizing the state of affairs in antitrust and standard setting. The summary (pages 323-342) is balanced, well-written and recommended reading for anyone interested in getting up to speed on the current policy issues. After summarizing Professor Carrier’s proposal for antitrust analysis of patent holdup (and other business conduct in the standard setting process), I’ll turn to highlighting a few areas where I found myself either disagreeing with his analysis or hoping for a more complete treatment.
In my own view, the two most pressing policy issues with respect to patent holdup are:

1. What is the appropriate role of antitrust in governing patent holdup?

2. If antitrust rules should govern patent holdup, which statute(s) and what type of analysis should apply? In particular, what is the appropriate scope of Section 2 of the Sherman Act and Section 5 of the FTC Act?

While Professor Carrier’s treatment of patent holdup usefully summarizes the debate, and also recommends a policy proposal that I largely agree with, I was left hoping for a bit more in this section of the book in terms of moving the ball forward on these important questions.

Let’s begin with the policy proposal itself. Professor Carrier argues that “given SSOs significant pro-competitive justifications, courts and the antitrust agencies should consider their activity under the Rule of Reason.” Carrier carves out standard setting organization (SSO) members’ joint decisions to fix prices on the final goods sold to consumers as the only conduct deserving of per se treatment. So far I’m on board. It makes economic and legal sense to treat both standard setting activities (with the exception of cartel behavior) and IP rules of SSOs as generally procompetitive and thus falling under the rule of reason. Carrier identifies three potential areas of liability concern under the rule of reason: patent holdup (he cites Dell and Unocal as examples), boycotts, and situations in which SSOs exert buyer power to reduce prices with the effect of reducing the incentive to innovate. Carrier writes that “absent these situations, SSO activity should be upheld under the rule of reason.”

There is much I agree with here. In fact, I find myself in agreement with Professor Carrier about most of what he writes about the limited utility of per se analysis in the standard setting arena. But I will focus on some areas where I suspect that we disagree, though I’m left unsure based solely on what is in the book. Carrier identifies patent holdup involving deception as a cause for concern under a rule of reason analysis. But the treatment is cursory. Carrier writes that “such activity could demonstrate attempted monopolization under Section 2 of the Sherman Act” and notes that a plaintiff making such a claim must demonstrate, amongst other requirements, that “the deception result[ed] in a standard’s adoption or higher royalties.” (page 342).

It is helpful for my purposes to bifurcate the world of patent holdup theories into those involving deception (the stylized facts in Rambus or the allegations in Broadcom v. Qualcomm) and those that do not and instead merely involve the ex post modification and/or breach of contractual commitments made in good faith in the standard setting process (FTC v. N-Data). Again, with respect to each of these patent holdup theories, there are at least two critically important policy issues:

1. What is the appropriate role of antitrust in governing patent holdup?

2. If antitrust rules should govern patent holdup, which statute(s) and what type of analysis should apply? In particular, what is the appropriate scope of Section 2 of the Sherman Act and Section 5 of the FTC Act?

With respect to the first policy question, Carrier appears to presume that antitrust rules should apply to unilateral conduct in the form of patent holdup involving both deception and breach theories. I may be wrong about the breach theories. While Carrier discusses N-Data briefly, his policy proposal singles out examples such as Dell and Unocal, which involved deception. I was left wanting a clearer exposition of the details of the policy proposal in this section. More fundamentally, the relative merits of state contract law and the patent doctrine of equitable estoppel in the SSO setting as alternatives to antitrust liability are an important topic. Of course, this issue is one of special concern for me, since Kobayashi and Wright (Federalism, Substantive Preemption, and Limits on Antitrust) have argued that antitrust rules layered on top of these alternative (and, we argue, superior) regulatory institutions threaten to chill participation in the SSO process and reduce welfare. But Kobayashi and Wright are not alone in questioning the utility of antitrust liability layered on top of these alternative bodies of state and federal law. For example, Froeb and Ganglmair present a model in which “the threat of antitrust liability on top of simple contracts shifts bargaining rents from creators to users of intellectual property in an inefficient way.” Other contributors to the literature questioning the role of antitrust liability in “breach”-style patent holdup cases such as N-Data include Anne Layne-Farrar. I will not take on the task in this blog post of repeating the various arguments making the case against antitrust liability here. But I believe that Carrier’s standard setting chapter and policy proposals would benefit from addressing them.

Second, assuming that antitrust rules should apply to patent holdup (both deception and breach variants), what should the analysis look like? With respect to the Section 2 analysis in claims involving deception, Professor Carrier appears to endorse the proposition that a demonstration of either actual exclusion (e.g., the deception is the but-for cause of the adoption of the technology) or higher royalties would be sufficient to support such a claim. I am not sure why the latter is, or should be, sufficient. As I’ve argued elsewhere, the Supreme Court’s decision in NYNEX applies in the patent holdup setting when (1) the patent holder has market power prior to the deception and (2) the deceptive conduct results in higher royalties but not exclusion of rival technologies. When those conditions are satisfied, NYNEX holds (consistent with much of the Supreme Court’s general jurisprudence on the monopolist’s freedom to set its own prices, e.g., Trinko, Linkline) that deceptive or fraudulent conduct that merely results in higher prices but not exclusion cannot be the basis of a Section 2 claim. Along those lines, I’ve argued that the D.C. Circuit’s Rambus decision is best interpreted as calling the Commission to task for failure to meet its burden of demonstrating that the first of these conditions did not apply. In any event, in reading Carrier’s treatment of patent holdup issues I’m left with several questions. For instance, I’m left wondering whether he believes that Section 2 should apply to both the deception and breach variants of patent holdup. If it applies to both, what is the appropriate scope of NYNEX? For example, do plaintiffs in patent holdup claims under Section 2 have the burden of demonstrating that the patent holder did not have monopoly power prior to the deceptive conduct? If not, on what grounds is NYNEX distinguishable? Is it because it was not an SSO case? What is the appropriate rule of reason analysis in a case involving deception in the standard setting process? What about cases like N-Data, where the plaintiff does not allege any “bad conduct” at the time the technology is selected for the standard but rather some renegotiation of contract terms at a later time?

Third, no discussion of patent holdup would be complete without a discussion of whether and how Section 5 of the FTC Act should apply to patent holdup theories. Here again, while Carrier discusses N-Data briefly, this question does not receive attention. So the blog symposium seems like a great place to ask questions like the following: Should Section 5 of the FTC Act apply to both the deception-based and the “pure breach” variants of patent holdup? These are some of the most pressing issues relating to antitrust analysis of standard setting. Recently, Chairman Leibowitz singled out N-Data as a paradigmatic example of the appropriate application of Section 5:

One category of potential cases [to which to apply Section 5] involves standard-setting. N-Data, our consent from last spring, is a useful example. Reasonable people can disagree over whether N-Data violated the Sherman Act because it was never clear whether N-Data’s alleged bad conduct actually caused its monopoly power. However, it was clear to the majority of the Commission that reneging on a commitment was not acceptable business behavior and that—at least in this context—it would harm American consumers. It does not require a complex analysis to see that such behavior could seriously undermine standard-setting, which is generally procompetitive, and dangerously limit the benefits that consumers now get from the wide adoption of industry standards for new technologies.

“Tales from the Crypt” Episode ’08 and ’09: The Return of Section 5 (“Unfair Methods of Competition in Commerce are Hereby Declared Unlawful”).
Similarly, Commissioners Leibowitz, Rosch, and Harbour noted in the N-Data majority statement that “there is little doubt that N-Data’s conduct constitutes an unfair method of competition,” describing the renegotiation of the ex ante contractual commitment to license at $1,000 to a RAND commitment as “oppressive” and an act that threatens to “stall [the standard setting process] to the detriment of all consumers.”

I wonder whether Professor Carrier thinks the majority in N-Data was correct, and if so, on what basis.  Or are breach-variant holdup claims more appropriately governed under Section 2? If the answer to either of those questions is yes, I’d like to know whether and on what basis the application of these mandatory antitrust rules is superior to contract law, which contains doctrine designed to identify and distinguish good faith modifications and renegotiations from attempts at ex post opportunism.

I should note that I do not consider it a criticism of the book that these details are largely left out. The task of organizing a coherent and intellectually provocative book that moves between copyright, patent, and antitrust is monumental and comes with its own special set of breadth and depth tradeoffs. However, I ultimately found the attention to legal, economic, and policy details in the SSO section less satisfying than the treatment of other equally complex issues throughout the book. And while I was disappointed that these details were not there, having read Professor Carrier’s views on innovation and antitrust more generally, I am very curious to see how he will manage the thorny details in the patent holdup context.

Michael Carrier has written a timely and interesting book.  Like Dan, I’m still digesting it (which means, in translation: I have not yet read every word).  There is much to like about the book, in particular its accessible format and content.  I do fear that it is a bit overly ambitious, however, hoping both to educate the completely uninitiated and to develop a more advanced agenda, and at times it reads like two separate books.

I suppose related to this criticism are my more detailed comments, which perhaps distill down to this: The book repeatedly and appropriately canvasses both sides of some pretty heated debates, nicely presenting the most basic arguments, and suggesting, if not saying outright, that these are matters about which we are profoundly uncertain.  Nevertheless, with what seems to me to be little support (and with only essentially-anecdotal empirical support), Carrier then chooses sides.

For example, as I discuss a bit more below, the concept of the innovation market is contentious and unsettled.  Carrier presents truncated versions of both sides of this debate and then summarily votes in favor of innovation markets, slyly offering to confine the analysis to pharmaceutical industry mergers but nevertheless proposing a “framework for innovation-market analysis.”  Frankly, the framework strikes me as little more than a stylized merger analysis under the Guidelines, with a “Schumpeterian Defense” thrown in for good measure (but an extremely limited one, essentially the same as the traditional failing firm defense).  I see little here to suggest that innovation market analysis, even as styled by Carrier, will do much to incorporate dynamic efficiency concerns into antitrust effectively.  And there are other examples.  I would have preferred to see a book that went into far greater depth in defending these sorts of choices among uncertain alternatives.

In more detail:  First off, the arc of the introductory section on antitrust is familiar: a swinging pendulum from under- to over-enforcement (and back again) describes the history of antitrust, and the optimum is somewhere in the middle.  But Bill Kovacic has masterfully decimated this argument before (although that hasn’t stopped it from persisting: to wit, Commissioner Rosch’s 2007 speech on US and EU antitrust enforcement asking again whether the pendulum has swung too far).  As Kovacic says:

It is bad enough that the narrative distorts actual enforcement experience to accentuate the pendulum’s movements. Worse, by obscuring the actual path of policy, the pendulum narrative impedes our understanding of how federal antitrust enforcement has developed and of what antitrust agencies must do to improve the quality of competition policy in the future.

Kovacic is surely correct that a more nuanced analysis of US antitrust history identifies far less a pendulum than a consistent evolution.

Carrier’s book bolsters the historical narrative with the traditional theoretical one, Crandall and Winston versus Baker: antitrust enforcement is costly and harmful versus “no it’s not.” (Actually Baker’s argument is more complex than that, but that’s the basic idea.)  But, alas, Baker relies primarily on four experiments in lax enforcement, as described on p. 67 of Carrier’s book, to support the contention that . . . less enforcement leads to more cartels.  Well, sure.  Where enforcement is more lax, you get more cartels.  But nothing in the examples supports the notion that less enforcement leads to more monopolization, and nothing supports the notion that less enforcement against monopolies is harmful to society.  (The examples aren’t really great at supporting the notion that lax enforcement of more nuanced forms of concerted action than cartels harms consumers, either.)  But this book is largely about unilateral conduct (and to a lesser extent mergers), not cartels, so it’s not at all clear to me that Baker’s work refutes the relevant portions of Crandall & Winston for present purposes.

Moreover, it has to be said that the actual evidence on mergers is really mixed, as a recent NERA study in Antitrust makes clear.

This all may seem like a quibble about an introductory point, but it’s much more than that.  I can’t help but notice that everyone who adopts the pendulum narrative does so to make the point that today’s antitrust enforcement is too lax and should be beefed up—history demands it.  This book is no exception.  But, of course, starting from the point of view that more antitrust is good for innovation, it is not surprising that Carrier finds this to be true throughout the book.  Meanwhile, the actual evidence says something pretty close to “reduced antitrust may result in more cartel activity”—which Adam Smith said, too, and which is a far more limited claim.

The primary focus for Carrier in discussing antitrust and innovation is so-called “innovation markets.” These are, in essence, markets consisting of R&D (as opposed to the product markets of traditional antitrust analysis).  And as Carrier notes, the theory behind innovation markets is that a merger between the only two, or two of a few, firms engaged in relevant R&D might increase the incentive to suppress at least one of the research paths.

That’s the theory, anyway.  But as Carrier himself points out (although he dismisses this criticism), “we do not know the market structure most conducive to innovation.”  We don’t know about the relationship between concentration and innovation what we know about the relationship between concentration and price—and we don’t even know much about that.  The lesson of the evolution of unilateral effects analysis in modern merger thinking is that market concentration is not a good predictor of effect.  Josh has a great forthcoming paper on this (forthcoming in a book Josh and I are editing together), and an early draft is here.

The fundamental flaw in the innovation market concept is precisely this:  That we don’t know about the relationship between market structure and effect, and error costs are high.  The two—two—most fundamental flaws in the innovation market concept are precisely this:  That we don’t know about the relationship between market structure and effect, error costs are high, and competition is multidimensional.  The three—three—fundamental flaws in the innovation market concept are . . .

I won’t belabor the points here too much, but it’s pretty straightforward that a) increased concentration might actually be good for incentivizing R&D (increasing expected returns to investment), b) innovation is precisely where error costs are highest and you don’t have to believe all that Schumpeter wrote to get that, and c) competition is multidimensional, and while concentration might seem to harm consumers on one dimension, it may benefit them on another—and we don’t know the magnitude of the tradeoff, or even exactly how to make it.

On the first point, I would refer our readers to Schumpeter, of course, but also to another paper in our forthcoming book: “Rewarding Innovation Efficiently: The Case for Exclusive Rights,” by Vincenzo Denicolo and Luigi Franzoni.  The article demonstrates that, especially when innovations are large (in Carrier’s term, “drastic”), maximal rights (in the article, patent rights, but the concept should carry over into market structure as well) incentivize optimal innovation, even though product market competition is weakened.   The odd thing is that Carrier draws precisely the opposite conclusion—that drastic innovations call for less, not more, concentration of returns.  But a significant body of literature suggests that in markets with leaders (monopolists) and endogenous entry (the more realistic assumption that entry depends on profitability rather than being exogenously determined and independent of profitability), leaders will, if anything, overinvest in innovation.  See, for an excellent example, Federico Etro, “Endogenous Market Structure and Antitrust Policy.”  An important point from that paper is summed up in this succinct quote:

A main point emerging from our analysis of the behavior of market leaders facing or not facing endogenous entry is that standard measures of the concentration of a market have no relation with the market power of the leaders and may lead to misleading welfare comparisons.

Just so, and I wonder why claims that market concentration is clearly bad for welfare, particularly in extremely ill-understood “innovation markets,” survive with no empirical support.

On the second, I would just point out that there is almost no discussion of error costs in the book—no discussion of bureaucratic agency issues, judicial process problems, public choice problems, and the like—other than to criticize excessive copyright protection for . . . precisely the same reason one might refrain from excessive antitrust enforcement.  Again, particularly when talking about unsettled concepts being enforced by imperfect agencies, I would like to see some more restraint.  To be fair, Carrier does try in several places to cabin the extent of his proposals (as I mentioned, (almost) confining the innovation market analysis to the pharmaceutical industry, for example), but I would have expected to see some justification for this cabining in clearer expressions of the kinds of institutional dysfunction that can systematically tar the antitrust enterprise.

Finally, on the third point (but more generally related to all three), referring to the critiques of the innovation market theory, Carrier writes:

There is an element of truth to each of these critiques.  In many cases we do not know all the potential innovators or the optimal relationship between R&D and innovation.  For that reason an expansive notion of the innovation-market concept is not appropriate.

How about: the concept is not appropriate, full stop?  We’re talking about markets with—excuse my French—a lot of shit we don’t know.  Why are we intervening at all?  Why are we not, at most, attempting to incorporate a more dynamic analysis into our traditional assessment of product market structure and behavior?  Given the complex and poorly understood relationships between investment in R&D, market structure, price, quality, speed of innovation, and welfare effects, shouldn’t even the cabined notion of the innovation-market concept be viewed with extreme distrust?  Given that Carrier sets up the general framework but is then forced to limit it to pharma mergers, I would like to see a firmer expression of uncertainty.

Let me finish with a comment on the applicability of the analysis even to the pharma industry: It’s not so clear cut, even there.  I’ll take just one example: Drastic versus nondrastic innovation.  Carrier claims that we do, in fact, know the optimal market structure for pharma in part because for drastic innovations (the sort common to pharma), “competition is superior to monopoly.”  I struggle to find the support for this contention in theory, but I know it is not true in practice that pharma trucks only in drastic innovation.  Of course, to some obvious extent this is true: Many of the most important innovations in the pharmaceutical industry are drastic.  But, in fact, although commonly dismissed by critics as a form of gaming the regulatory system, it is also true that pharmaceutical companies are constantly tweaking their products, changing chemical compositions slightly, changing pill coatings, changing dosages, etc.  These nondrastic changes, while certainly less, well, drastic, than the big breakthroughs, may be no less important.  The human body is a complex system, and I imagine Carrier and many other pharma industry critics are not physiologists.  I think the claim that these small adjustments amount to gaming the system and can be and should be deterred—or disregarded in designing the “optimal market structure” for the industry—is a faulty one, and a reflection more of what we don’t understand about complex, innovative industries than what we do.

I have much more to say about this thought-provoking book, but I’ll leave it for battle in the comments.

This post is from Phil Weiser (Colorado)

It is trite to say that “we are all Schumpeterians now.”  When it comes to appreciating the importance of innovation and entrepreneurship, however, we are.  Schumpeter, unfortunately, did not leave a theory of innovation that lends itself to easy application to public policy prescriptions, as Brad De Long has explained.  By so clearly highlighting the role that antitrust law and intellectual property policy can play in spurring innovation, Michael Carrier has done the field a great service.  Indeed, Mike has written an impressive, ambitious, and important book.  But in a post like this, I come not to praise him, but to take pot shots from the peanut gallery.

My first pot shot is one that Mike knows is coming—and footnote 143 reveals as much.  On Mike’s view, the Trinko/Credit Suisse double header is, at worst, benign and, at best, on the money.  This view of Trinko leads Mike to predict that the Supreme Court will take an aggressive posture as to “pay for delay” pharmaceutical settlements that have developed as an unintended consequence of the Hatch-Waxman Act.  The problem with this view is that Mike overlooks the most disturbing aspect of Trinko—it made the judgment about the effectiveness of the regulatory regime (in that case, as to the FCC) on a motion to dismiss.  Notably, in the AT&T antitrust litigation, this issue was a question of fact and not presumed based on the mere presence of a regulatory regime.  I have the same concern about Credit Suisse, which took a generous view of the SEC’s regulatory effectiveness not long after that agency (and the self-regulatory organization upon which it relied) failed to unearth a cartel arrangement at the NASDAQ that was only revealed through antitrust litigation.  But I have written about this before, as footnote 143 recounts.

My second point is to underscore a point Mike makes in regard to the Microsoft antitrust litigation—whether the presence of intellectual property rights (IPRs) should justify a firm’s decision to withhold access to application programming interfaces or protocols necessary to facilitate interoperability.  I agree with his conclusion that IPRs should not displace antitrust oversight.  Again, to invoke U.S. v. AT&T, consider that, had the relevant interconnection issue in that case involved patented interfaces, it would have come out differently under the theory pressed by Microsoft.  Given that software patents are controversial to begin with, awarding the recipient of a patent on an application programming interface or communications protocol a get-out-of-jail-free card is hard to justify.  That said, I would have liked to see Mike develop his view of the Microsoft case.  He may well have resisted doing so out of concerns about space, a lack of historical distance, or uncertainty about what type of verdict to pronounce on the decree.  At a minimum, I believe it is safe to say that the case underscores the challenges of “regulating interoperability,” of which the IPR issues are only a relatively small part.

For a final point, let me close with the discussion of standard setting organizations (SSOs).  The role of SSOs is potentially very important, and, until recently, they operated with a limited degree of awareness of the regulatory challenges they face as to, among other issues, the threat of patent holdup.  As I have explained elsewhere, there is a strong argument that SSOs should be given the type of latitude that Mike calls for in facilitating cooperation and managing the behavior of individual firms.  Where Mike could drill down deeper, however, is in evaluating the institutional challenge of how to enforce commitments by firms participating in standard setting organizations to restrict their collection of royalties to reasonable and non-discriminatory (RAND) terms.  The biggest open question is whether the FTC’s Section 5 authority will ultimately prove to be an important tool in this regard (as used in the N-Data case).  In the wake of the Supreme Court’s denial of certiorari in the Rambus case, there will undoubtedly be more pressure for the FTC to use this tool.

Mike’s book provides lots of fodder for discussion and will provide policymakers with a rich set of proposals to evaluate.  I look forward to hearing his voice on these issues over the years ahead.

This post is from Dan Crane (Cardozo)

Congratulations to Mike on a very fine book, which I must admit I am still in the process of digesting.  I will confine my initial comments to Mike’s chapter on patent settlements (Chapter 15), which I understand will also be coming out as an article in the Michigan Law Review. 

Patent settlements involving “reverse payments” are a huge topic on which I and many others have spilled much ink already.  Representative Bobby Rush (President Obama’s erstwhile nemesis from Chicago’s South Side) has just introduced legislation that would ban reverse payments.  I will not regurgitate my entire spiel on patent settlements here, but instead will just try to highlight my essential disagreement with Mike and others who focus on reverse payment settlements between branded and generic pharmaceutical companies as a special antitrust problem.

Mike would make reverse payments—where the branded drug company pays the generic to leave the market—presumptively illegal.  The settling parties would have a rebuttal right to demonstrate the reasonableness of the settlement in light of litigation costs, the generic’s cash-strapped financial position, the parties’ information asymmetries, and a catch-all reasonableness category.  Mike contemplates, however, that courts might eventually find that these kinds of justifications were too weak, insubstantial, or infrequent to justify allowing a rebuttal case and simply make reverse payments per se illegal.

My basic problem with Mike’s approach—and others that focus on reverse payment settlements as a unique species of antitrust problem—is that the social costs of anticompetitive patent settlements are only loosely correlated with the direction in which payment flows in the settlement.  To repeat a claim that I’ve made on many occasions, the social cost of allowing patent settlements that involve a cessation of competition between the branded and generic firm equals the social cost of the continuing branded monopoly (at least the deadweight loss, but include the wealth transfers if you like) times the probability that, but for the settlement, the generic would have won the patent infringement action and entered the market in competition with the branded firm.  There are also social costs to disallowing patent settlements, but let’s put those aside for now.
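To make that expected-cost claim concrete, here is a minimal worked illustration; the symbols and numbers are mine and purely hypothetical, not drawn from the book or from the post.  Let p be the probability that, but for the settlement, the generic would have prevailed in the infringement action, and let D be the annual deadweight loss from the continued branded monopoly.  The expected social cost of a settlement that ends the generic’s challenge is then roughly p × D per year of delayed entry.  For example, extinguishing a challenge with a 60 percent chance of success against a drug whose monopoly pricing generates $100 million a year in deadweight loss yields an expected social cost on the order of 0.6 × $100 million = $60 million per year of delay; note that nothing in this arithmetic turns on which direction the settlement payment flows.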

What is the relationship between the social cost of cessation of competition between the branded and generic and the fact that the settlement payment flows from the branded to the generic?  None, unless the fact that the payment flows “abnormally” from the patentee-plaintiff to the infringer-defendant necessarily evidences that the plaintiff’s claim is weak, which in turn means that the branded’s probability of success in the infringement action is low and the social cost of the settlement is therefore high.  However, as I’ve explained at length elsewhere, there are good reasons—particularly given the structure of the Hatch-Waxman Act—why the payment flows in an “abnormal” direction even in a case in which there is a high probability that the patentee would have won the infringement action (and, consequently, the social cost of the settlement is relatively low).  In other words, the mere fact that a branded-generic settlement involves a “reverse payment” only weakly evidences the social cost of the settlement.

So antitrust rules that focus on “reverse payment” settlements as a category run the risk of creating false positives, but they also run the risk of creating false negatives to the extent that they focus the inquiry on the direction in which consideration flows—a not terribly helpful spot.  It is often not hard to structure a branded-generic settlement in a way that does not involve reverse payments but still involves the key ingredients of social cost—the cessation of meaningful competition between the two firms and a low probability that a court would have enjoined the generic on patent infringement grounds.  Scott Hemphill’s empirical research on patent settlements following some of the early negative decisions (like the Sixth Circuit’s Cardizem decision holding reverse payments per se illegal) shows that creative lawyers are capable of crafting settlement agreements that have the same effects as the most pernicious reverse payment cases but would pass unscathed under a rule focusing on reverse payments.

Indeed, I have little doubt that if the Rush bill passes, antitrust lawyers will make a bundle of money restructuring patent settlement agreements to comply with the law.  Here are a few suggested schemes for avoiding a reverse payment ban:

- Branded retains Generic as its exclusive manufacturing and distribution agent for Branded’s authorized generic.  Utilizing its newfound freedom under Leegin, Branded sets the resale price of the generic at an appropriate price-discriminatory discount off the branded price but nonetheless a monopoly price.  Branded continues to collect monopoly rents by making Generic pay an exorbitant royalty or annual lump-sum fee.  If Generic can’t afford the payments up front, Branded provides financing.

- Branded grants Generic an exclusive license to manufacture and distribute under the patent in Canada.  Generic charges a monopoly price in Canada (so no one bothers re-importing), and Branded charges a monopoly price in the U.S.  There doesn’t need to be any explicit agreement that Generic won’t enter the U.S.—Generic gets the point.

- The Schering scheme: Generic licenses or sells Branded some other, worthless drug, for which Branded pays Generic some huge price.  Investment bankers are paid to say the drug was worth it.  Good luck litigating this as a reverse payment case; something like this worked in Schering.

I could go on, but the basic point is that the creativity of high-paid New York lawyers exceeds the foresight of anyone drafting legislation in this area.  As much as I agree with Mike that patent settlements involving the cessation of competition between branded and generic firms are a big problem, the focus on reverse payments is off the mark.

Welcome to the first TOTM Blog Symposium.  This is a format we hope to make more use of on TOTM in the future and we’ve got an ideal project to start with.  For the next two days (and maybe three) we’ll be discussing Professor Michael Carrier’s (Rutgers) forthcoming book: Innovation for the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law.  We’ve invited a number of leading commentators in both intellectual property and antitrust law to contribute to the symposium.  I’m thrilled that each has agreed to participate.  The lineup includes: Dan Crane (University of Chicago/ Cardozo), Geoff Manne (TOTM/LECG), Phil Weiser (Colorado), Dennis Crouch (Patently-O/Missouri), Brett Frischmann (Cornell/ Loyola), F. Scott Kieff (Wash U./ Hoover/ and on his way to GW), Mike Carrier, and me.

The format will be as follows.  Today we’ll have posts from Crane, Manne, Weiser, and Wright on the aspects of Innovation for the 21st Century that focus on competition policy.  Tomorrow, Professors Frischmann, Kieff, and Crouch will focus on the intellectual property-related proposals.  Professor Carrier will have the opportunity to respond to the posts Tuesday evening or Wednesday.  And of course, we hope that both the participants and our normal group of high-quality commenters will find some time to mix it up in the comments.  The participants have been given broad leeway to discuss general themes in Carrier’s work or home in on specific policy proposals.

With the formalities out of the way, you can expect the first of Monday’s posts to go up in the early morning, and we’ll add posts from Crane, Manne, and Wright throughout the day.

See you soon.

On March 30th and 31st, TOTM will hold its first blog symposium.  The topic will be Michael Carrier’s (Rutgers) forthcoming book: Innovation for the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law (from Oxford University Press).


We’ve invited a number of leading scholars from the fields of antitrust and intellectual property to comment on Professor Carrier’s book.  Here is a description of the book’s contents from Professor Carrier:

Innovation for the 21st Century offers ten proposals, from pharmaceuticals to peer-to-peer software, that will help foster innovation.  Of the ten proposals, three target antitrust topics that may be of interest to your readers: (1) settlement agreements between brand and generic firms in the pharmaceutical industry, (2) an innovation-markets framework to be applied to pharmaceutical mergers in which the “products” are in preclinical or clinical trials, and (3) standard-setting.  The book also offers a primer on patent, copyright, and antitrust law, as well as the IP-antitrust intersection.

On Monday, March 30th, we will focus primarily on the antitrust aspects of Carrier’s proposals.  The four discussants will be: Dan Crane (University of Chicago/ Cardozo), Geoff Manne (TOTM/LECG), Phil Weiser (Colorado), and yours truly.

On Tuesday, March 31st, we will focus primarily on the intellectual property aspects of Carrier’s work.  The three discussants will be:  Dennis Crouch (Patently-O/Missouri), Brett Frischmann (Cornell/ Loyola), and F. Scott Kieff (Wash U./ Hoover/ and on his way to GW).

On Tuesday afternoon or Wednesday morning (depending on the length of the posts), Carrier will post a response.  In the meantime, I do hope that the participants, Professor Carrier, and our normal cadre of excellent commenters will mix it up in the comments throughout (mix it up, of course, in the civil and respectful tone that we usually see here).

I want to thank this great lineup of antitrust and IP scholars for agreeing to participate.  It should be a lot of fun.

The symposium will be a joint production, thanks to Dennis Crouch, with posts going up both here and Patently-O.

More details to be announced soon. For now, buy the book!  See you on March 30th and 31st.