Professor Carrier’s Response

Michael A. Carrier, Professor Carrier’s Response, Truth on the Market (April 01, 2009), https://truthonthemarket.com/2009/04/01/professor-carriers-response/

This article is a part of the Innovation for the 21st Century symposium.

First of all, I would like to express my deepest gratitude to Josh Wright. Only because of Josh’s creativity and tireless, flawless execution did this blog symposium come about and run so smoothly. I also would like to thank Dennis Crouch, who has generously cross-posted the symposium at PatentlyO. And I am grateful for the attention of the communities at TOTM and PatentlyO, which have patiently scrolled through countless pages and posts to learn about my book.

Finally, I would like to thank Dan Crane, Dennis Crouch, Brett Frischmann, Scott Kieff, Geoff Manne, Phil Weiser, and Josh Wright for their insightful and incisive comments. Though they each had busy schedules, they managed to squeeze in a look at some or all of a book that is not the shortest ever written. And wasting no time, they focused like a laser on the book’s most ambitious proposals, as well as its omissions. If I didn’t know better, I would think that the commentators divided the market of my book to minimize overlap in treatment. I do know better, though, enough to know that the breadth of critiques and lack of overlap reflect Josh’s skill in putting together such a diverse and impressive group of commentators.

Without further ado, let me address the comments by substantive area, starting with antitrust law, proceeding through patent and copyright law, and concluding with the most general critiques.

Geoff Manne’s Antitrust Comments

We begin with Geoff Manne’s comments. At the end of my post, I will respond to his general critiques. In this section, I address his antitrust comments, and in the next I turn to innovation markets.

Geoff’s first critique involves my use of the “pendulum” metaphor for antitrust’s history. Let me offer two responses.

First, my general antitrust history, which is limited to four pages (pages 61-64), does refer to a “pendulum,” but only in the context of judicial analysis. Commissioner Kovacic, in contrast, specifically laments commentators’ exaggeration of the role played by high-level agency appointments, along with the narrative of “too hot” enforcement in the 1960s and 1970s, “too cold” enforcement in the 1980s, and “just right” enforcement in the 1990s.

My primary focus, instead (and in contrast to most of the commentators using the general pendulum metaphor), is the IP-antitrust intersection, which makes up two chapters of the book (pages 71-99). Commissioner Kovacic does not address the intersection in his article. And, more to the point, whether we call the history of IP-antitrust analysis a “pendulum” or an “evolution” (to aggressive antitrust enforcement and back) should not matter much. For as a descriptive matter, there certainly has been a shift: from courts that refused to impose antitrust liability for patent-based activity (1890-1912), to courts that more searchingly applied patent misuse and antitrust doctrine (1912-1960s), to courts that applied a more deferential, economics-based approach (1977-present).

More important than the descriptive name we append to the history, however, is what we do with it. Geoff concludes that the book “is no exception” to the trend that “everyone who adopts the pendulum narrative does so to make the point that today’s antitrust enforcement is too lax and should be beefed up.” With respect, that is not the case.

Nowhere do I conclude that antitrust needs to be “beefed up” across the board of its innovation-related scrutiny. To show just how far we’ve come, and how limited is the canvas on which my antitrust proposals appear, let me set the stage (quoting from page 292):

Promoting innovation has not traditionally been one of antitrust’s top priorities. In the mid-20th century, courts adopted a rigid stance toward IP, automatically condemning tying and licensing arrangements. In the 1970s, the Justice Department followed a “Nine No-No’s” policy that assumed that an array of harmless licensing activities violated the antitrust laws.

By the 1980s, the tide had turned. Courts applied the more lenient Rule of Reason to licensing arrangements and upheld blanket licenses containing price fixing. Congress enacted legislation creating a federal court to hear patent appeals, requiring Rule-of-Reason analysis for joint ventures engaged in research and development, and limiting the range of activities that could demonstrate patent misuse.

By the 1990s, innovation was even more explicitly recognized. The antitrust agencies jointly issued Guidelines for the Licensing of Intellectual Property that appreciated the procompetitive benefits of licensing and recognized that IP does not necessarily reflect market power. More enlightened analysis of business activity, including patent pools, standard-setting organizations, and new product introductions, conformed to this approach.

Because of this advance, the breadth of my antitrust proposals is far less than it would have been a generation ago. There is no urgent need, for example, to address licensing or patent pools. I conclude that antitrust only needs three recommendations to improve its treatment of innovation. And one of those proposals encourages the agencies and courts to continue on their path of not punishing the activities of standard-setting organizations.

Looking out across the universe of antitrust’s treatment of IP/innovation issues, I concluded that pharmaceutical patent settlements between brand and generic firms presented the setting in which more aggressive antitrust enforcement was most necessary. And while I offer a framework for innovation markets (which could be viewed as increasing enforcement over a “no innovation markets” baseline), the analysis does not necessarily lead to more aggressive treatment, as revealed by my dissents from the multiple FTC innovation market challenges in which the merging firms had drugs in preclinical studies.

A final point stems from Geoff’s statement that “this book is largely about unilateral conduct (and to a lesser extent mergers) [as opposed to] cartels.” As a result, “it’s not at all clear . . . that [Jonathan] Baker’s work [defending antitrust] refutes the relevant portions of Crandall & Winston [calling into question the need for antitrust].” Leaving aside the independent critiques of Crandall & Winston that I synthesize on page 66, cartels do present the relevant framework for my treatment of settlements and (at least the coordinated elements of) standard-setting.

One relevant case study is provided by payments from brand-name drug firms to generics to settle patent litigation and delay entering the market (a subject I discuss in detail below).  These payments disappeared when first challenged, only to reappear when the antitrust coast was clear.  Between 1992 and 1999, 8 of the 14 final settlements between brands and generic first-filers involved reverse payments.  In 2000, the FTC announced that it would challenge such settlements.  In the succeeding four years, between 2000 and 2004, not one of 20 reported agreements involved a brand firm paying a generic filer to delay entering the market.  During this period, parties continued settling their disputes, but in ways less restrictive of competition, such as through licenses allowing early generic entry.

In 2005, after the Schering and Tamoxifen courts took a lenient view of these agreements, the reverse payment floodgates opened.  In 2005, 3 of 11 final settlements (27 percent) between brand-name and generic firms included such payments.  In 2006, 14 of 28 settlements (50 percent) contained these provisions.  And in 2007, 14 of 33 settlements (42 percent) included such compensation.  Equally concerning, in the past two years, roughly 70 to 80 percent of settlements between brand firms and first generic filers have involved reverse payments.  In short, cartels present a more appropriate framework for reverse payment agreements than unilateral conduct.

Is My Treatment of Innovation Markets Too Innovative?

For more specific antitrust analysis, Geoff turns to innovation markets, offering three critiques.

First, he finds the scope of my proposal unclear. In case there is any ambiguity in the book, let me be completely clear: my innovation markets framework applies only to the pharmaceutical industry. This setting provides a unique opportunity to address concerns that have been leveled against the concept. While it is conceivable that an innovation markets framework could apply outside the pharmaceutical industry, my framework is not so designed.

Second, to the extent I offer “little more than a stylized merger analysis” under the Horizontal Merger Guidelines, that actually is an improvement over the current state of affairs. The agencies’ approach can be gleaned only from consent decrees. And in this setting, the FTC (which is responsible for antitrust enforcement in the pharmaceutical industry) has not explicitly considered many of the relevant factors that I suggest.

Each of the five steps in my framework promises to improve innovation markets analysis. The first step, evaluating market concentration, incorporates the realities of innovation in the pharmaceutical industry. A firm in preclinical studies, with roughly a 1 in 4,000 chance of reaching the market, offers far fewer concerns than a firm in Phase III of clinical studies with (on average) a 57 percent likelihood of reaching the market.

This is revealed through my second step, which assesses competitive harm. The theory behind innovation markets—that a merger between the only two, or two of a few, firms in research and development (R&D) might increase the incentive to suppress at least one of the research paths (page 297)—applies more directly to firms that are closer to the market, as these firms have a heightened incentive and ability to suppress R&D paths.

Third, the merging firms can rebut the agencies’ claim of concentration by showing that at least one other firm is likely to reach the market.

Fourth, the merging firms can proffer an efficiencies defense. Fifth, a “Schumpeterian” defense can be offered by small firms that would not otherwise be able to navigate the regulatory process.
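
To make the stage-sensitivity of the first and third steps concrete, here is a minimal sketch (mine, not the book’s framework): the preclinical and Phase III success rates are the figures quoted above, while the Phase I and Phase II rates are purely illustrative assumptions.

```python
# Illustrative screen for innovation market analysis.
STAGE_SUCCESS = {
    "preclinical": 1 / 4000,  # figure quoted above
    "phase_1": 0.10,          # assumed for illustration
    "phase_2": 0.30,          # assumed for illustration
    "phase_3": 0.57,          # average figure quoted above
}

def p_any_reaches_market(stages):
    """Probability that at least one candidate in `stages` reaches the market."""
    p_none = 1.0
    for stage in stages:
        p_none *= 1 - STAGE_SUCCESS[stage]
    return 1 - p_none

# Step one: two merging preclinical firms pose little concern, while two
# Phase III rivals present a real prospect of a suppressed research path.
print(p_any_reaches_market(["preclinical", "preclinical"]))  # ~0.0005
print(p_any_reaches_market(["phase_3", "phase_3"]))          # ~0.815

# Step three: the merging firms can rebut the concentration claim by showing
# that a nonmerging firm is likely to reach the market in any event.
print(p_any_reaches_market(["phase_3"]))                     # 0.57
```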

Incorporating these stages in the five-part framework carves out space for factors that are crucial but have not previously been explicitly considered. I turn to case studies in the book to flesh out these points. For example, I conclude that the FTC should not have challenged the innovation market for “CD4-based therapeutics for the treatment of AIDS and HIV infection” in the Roche-Genentech merger, since Roche was in preclinical studies and Genentech was in Phase I. In contrast, the FTC correctly challenged the merger between Baxter and Immuno, in part because—in the market for fibrin sealants (which are used to stop bleeding)—each firm was at least in Phase II.

Pivoting to defenses that the merging parties could offer, I show that likely entry by two nonmerging parties in Phase III supports my conclusion that the FTC should not have challenged the merger between Pfizer and Warner-Lambert in the market for an inhibitor for solid cancerous tumors. And I explain why an increased likelihood that a new product will reach the market should count as an efficiency in the setting of fatal, difficult-to-treat diseases such as Pompe Disease. Given that the FTC in 2004 split 3-1-1 on the issue whether to challenge the merger of Genzyme and Novazyme, which were developing treatments for this disease, such a framework could prove helpful.

Third, Geoff points to the “fundamental flaw” in innovation markets: “[t]hat we don’t know about the relationship between market structure and effect.” I agree that there is no simple answer to the question of which market structure is most conducive to innovation. But in recent years, scholars—such as Rich Gilbert, Jonathan Baker, and I—have explored this issue in more fine-grained settings. Along these lines, several of the factors that tilt ideal market structures toward monopoly or competition demonstrate the importance of competition for pharmaceutical innovation. Without repeating all my arguments from the book, elements of the industry that reflect competition’s significance include the prevalence of product (rather than process) innovation, a high rate of technological opportunity, and appropriability (pages 300-03).

By conducting the analysis at the level of the industry, I aim to avoid the paralyzing uncertainty posed by a single, unknowable relationship between market structure and innovation. At the same time, a review of the FTC’s innovation market challenges uncovers common characteristics that provide significant assistance in analyzing these issues.

One example that Geoff focuses upon involves drastic innovation, which (stated most simply) displaces demand for the existing product. There is a vast literature on the issue (though the Denicolo and Franzoni paper Geoff cites does not address drastic innovation, but rather the importance of the innovation, as measured by the “level of the investments” it is able to attract). As an aside, patent rights also do not “carry over into market structure”—in fact, patent protection often reduces the need for other mechanisms, such as size, to appropriate investments.

My use of the concept of drastic innovation underscores the benefits of competition where one of the merging firms developing the next product generation has a monopoly in the existing product market. A quick look at the two innovation market challenges in which this concept could be applied is instructive.

In the first, the FTC challenged the merger of Glaxo and Wellcome, which implicated the R&D market for noninjectable treatments for migraine headaches. Because Glaxo already possessed a monopoly on injectable migraine treatment, it would have a natural incentive to suppress the new product. In the second (which I discuss here), Glaxo Wellcome would be tempted to suppress its prophylactic herpes vaccine (which it and merging partner SmithKline Beecham were researching) so as not to cannibalize sales from its current monopoly in a herpes-suppression drug.

Josh Wright, Standard-Setting, and Section 2

Turning to the next antitrust chapter, Josh Wright zeroes in on the standard-setting issues I do not address in the book. I will discuss all of these issues shortly. But first, a quick explanation of how I constructed the chapter is in order.

My primary goal in the standards chapter was to demonstrate the need for antitrust to defer to standards and the activities of standard-setting organizations (SSOs). By explaining the procompetitive effects of SSOs, as well as the essential role played by IP rules, such as licensing and disclosure rules, I sought to make the strongest, cleanest case for antitrust to continue its deference and to gain support from as many readers as possible.

The downside of seeking consensus, of course, is sweeping some issues under the rug. Josh looks under the rug, and reasonably asks my views on several important, cutting-edge standards issues.

First, he asks which of two activities—deception or breach—could form the basis of a Section 2 monopolization violation. I am comfortable asserting that Section 2 can apply to cases of deception. Where the defendant deceives the SSO and attains monopoly power as a result, Section 2 liability could be appropriate.

Turning to “breach,” let me parse this category to distinguish two scenarios. In the first, similar to the N-Data case I discuss below, the patentee increases the price after the standard has already been locked in and the firm has gained monopoly power. This would be controlled by NYNEX as an example of the “exercise of market power that is lawfully in the hands of a monopolist.” The conclusion that Section 2 does not apply opens the door for Josh’s important work (with Bruce Kobayashi), which advocates the use of patent law’s equitable estoppel doctrine, as well as state contract and tort law to fill the gap.

The second scenario, however, seems to fall somewhere between breach and deception. How should we categorize a patentee’s promise to accept RAND (reasonable and nondiscriminatory) licensing in the process of standard selection, followed by its subsequent imposition of royalty terms arguably not consistent with the RAND commitment? This conduct may or may not be deception, depending on the facts of the case. But it could play a role in attaining monopoly power. For the patentee’s commitment to accept RAND terms could have been central to the selection of its technology for the standard.

Of course, the difficulties of determining RAND and challenges facing courts taking on this task warrant great caution. Nonetheless, Section 2 liability could apply since the patentee’s RAND commitment could have played a role in having its patent incorporated into the standard (and gaining monopoly power). In this setting, the activity would seem to fall outside the four corners of NYNEX.

These issues lead naturally to the second issue: causation. Here, I would argue (as PatentlyO readers may recall I did a little while ago) for a standard that is broader than that set forth by the D.C. Circuit in its 2008 Rambus decision. The court in that case found that the FTC did not sufficiently show causation because it was possible that the SSO could have incorporated Rambus’s technology into the standard even if its IP had been disclosed. In other words, the lack of disclosure could not definitively be pinpointed as the catalyst for monopoly power. The D.C. Circuit in essence applied a “but for” standard by which the plaintiff would need to show that the monopolist’s deceptive conduct was the sole reason it acquired monopoly power.

The bar, however, need not be set so high. The D.C. Circuit’s 2001 Microsoft decision offers a standard that would appear to be more reasonable. In that case, the court explained that “neither plaintiffs nor the court can confidently reconstruct a product’s hypothetical technological development in a world absent the defendant’s exclusionary conduct.” Because “the defendant is made to suffer the uncertain consequences of its own undesirable conduct,” the court inferred causation where exclusionary conduct “reasonably appear[ed] capable of making a significant contribution to . . . maintaining monopoly power.”

The D.C. Circuit’s “but for” causation standard in Rambus presents particular challenges in the context of SSOs, which often consist of numerous participants and multiple competing technologies. In this setting, it is difficult to delineate the precise cause of monopoly power. Compounding the challenges are the divergent levels of need for different patents. Stated most simply, deception could be the reason a weak patent is included in a standard, but could play a less direct role for strong patents that are essential to the standard.

In short, I would not require the plaintiff to show that the deception was the “but for” cause of the monopoly power. Instead, I would support an approach more consistent with the D.C. Circuit’s Microsoft opinion.

Third (and briefly), in case it is not clear by now, I agree with Josh’s assertion that NYNEX covers the pricing of a firm that already has monopoly power. For that reason, if the patentee raises its price after its patent has already been selected for inclusion in the standard, then NYNEX would appear to block Section 2 liability. Looking backward, this observation overlaps with the issues discussed above. Looking forward, it leads to N-Data.

Josh and Phil Weiser’s (Section) 5-Ton Elephant in the Room

Josh and Phil Weiser each point to the Section 5 elephant in the room. In January 2008, the FTC filed a complaint against Negotiated Data Solutions (N-Data). N-Data licensed patents used in equipment employing Ethernet, a networking standard. N-Data’s predecessor had committed to license its technology for a one-time royalty of $1,000 per licensee. But N-Data later demanded royalties “far in excess of that commitment.”

By a vote of 3-2, the Federal Trade Commission challenged N-Data’s action. It did not allege a violation of the Sherman Act, but instead claimed an unfair method of competition and unfair act or practice under Section 5 of the Federal Trade Commission Act. The majority asserted that N-Data’s behavior harmed consumers and businesses and explained that its exercise of its “unique” authority was needed to “preserv[e] a free and dynamic marketplace.”

Deborah Platt Majoras, then-Chairman of the FTC, dissented, worrying that the majority did not “identif[y] a meaningful limiting principle” for determining an unfair method of competition. Commissioner William Kovacic also dissented, stating that the majority’s failure to distinguish between its two theories of liability—unfair methods of competition and unfair acts or practices—masked weaknesses in its challenge.

At Josh and Phil’s appropriate urging, let me weigh in. I believe there is a role for Section 5 of the FTC Act. I think it is beneficial that the FTC is considering how to justifiably apply this provision. But I am not convinced that the majority in N-Data adequately set forth a framework that justified the application of Section 5.

One concern with Section 5 is that it should not automatically assume the role of a backstop for antitrust claims that come close but do not quite satisfy the Sherman Act. Antitrust principles have developed through a rich and exhaustive common law process, and Section 5 should not serve as the “minor league” version of the Sherman Act’s “major league.” My concern with applying Section 5 to breach cases (of which N-Data presents a straightforward example, devoid of the RAND promise I mentioned above) is that there should be a standard that justifies such enforcement.

Also contributing to concern here are facts revealed in Chairman Majoras’s dissent, including (1) an initial royalty that was nominal, (2) set by a predecessor eight years earlier, (3) for a product for which no licenses were sought in the intervening eight years, and (4) a royalty increase to which the SSO’s Patent Administrator did not object.

Phil’s Interoperability and Microsoft

Keeping with the theme of antitrust issues that could have been more fully developed in the book, Phil astutely observes that I discuss, but do not offer proposals for, the Microsoft case.

Of all the facets of the Microsoft case, the European Union’s case most directly implicates the IP-antitrust intersection. The facts, which are complicated, are explored more fully in the book (pages 89-92). For now, suffice it to say that Microsoft denied rivals information needed to connect non-Microsoft work group servers (which provide services used by office workers such as file and print sharing) with Windows computers and servers.

Microsoft claimed that its protocols and specifications (which provide the rules of interconnection and other documentation) were protected by patents, copyrights, and trade secrets. The big question, of course, is whether Microsoft should be compelled to share its IP-protected interfaces. The natural framework in which courts address these issues is the essential facilities doctrine, which provides that a monopolist cannot deny to its competitors facilities that are necessary to compete in a particular market. Assuming (in the U.S.) that this doctrine survives Trinko, the question is whether a firm’s denial of access to its IP violates it.

In nearly all cases, the answer should be no. The right to exclude is the core of the IP right, and thus should ordinarily be viewed as sacrosanct. Of additional concern is that IP essential facility claims tempt courts to force the sharing of helpful (albeit not essential) facilities. (For an example, see Intergraph v. Intel, in which a district court—before being reversed by the Federal Circuit—found that access to business information could constitute an essential facility.)

But the fact that IP-protected products should almost never be treated as an essential facility does not mean that they should never be so treated. For the Microsoft case raises a setting in which such claims should carefully be considered. The European Union has followed such an approach, finding that, under Article 82, a refusal to deal could constitute an abuse of dominance in “exceptional circumstances,” such as where a refusal (1) relates to a product indispensable to activity in a neighboring market, (2) excludes competition on that market, and (3) prevents the appearance of a new product for which there is potential consumer demand.

It is hard to see U.S. courts adopting such a framework today. But should they? That question can be answered only by weighing the benefits of (strictly) imposing liability for monopolists’ conduct that prevents interoperability against the administrative and error costs that would accompany such a framework. The costs of such a framework would be high. For there is no guarantee that the analysis would be applied with the level of strict scrutiny that is required. Loose interpretations of indispensability, in particular, would be dangerous.

But, again, interoperability has significant benefits (which I discuss at pages 167-70). The literature has recently offered two important accounts of the concept. In one, Phil explores the relationship between platforms and applications. In the second, Pamela Samuelson recounts the multiple impediments to interoperability.

I cannot resolve all the issues presented by interoperability and administrative/error costs in this space. But in the end, Phil is right that this presents an important issue for ongoing debate, and the two articles offer a reasonable starting point for exploring at least the issues related to interoperability.

Dan Crane and the Direction of Drug Patent Settlements

Turning to the last antitrust chapter, Dan Crane explores settlement agreements by which brand-name pharmaceutical companies pay generic firms to drop patent challenges and delay entering the market. Dan is right that the direction of the payment, by itself, is not what is suspicious about brand-name pharmaceutical companies’ payments to generic firms for delay.

Rather, in the context of settlements under the Hatch-Waxman Act, three characteristics raise concern. First, in contrast to other patent settlements—by which an alleged infringer pays the patentee and enters the market—the generic agrees not to enter the market, which more directly threatens competition.

Second is the unique setting provided by the Hatch-Waxman Act. As I discuss in detail in the book (pages 347-57), the Act’s drafters crafted a nuanced regime that addressed many of the concerns that existed at the time of enactment in 1984.

They fostered innovation by providing brand-name drug companies with patent term extensions, nonpatent market exclusivity (for new chemical entities and new clinical investigations), and an automatic 30-month stay for brand firms that sued generics that had challenged the patent’s validity or claimed noninfringement.

At the same time, they fostered competition by (1) allowing generics to rely on brand firms’ studies, thereby accelerating entry, (2) resuscitating the experimental use defense by overturning Roche v. Bolar and exempting from infringement the manufacture, use, or sale of a patented invention for uses “reasonably related to the development and submission of information” under the FDA Act, and (3) encouraging generics to challenge invalid or noninfringed patents by creating a 180-day period of marketing exclusivity for the first generic firm to do so. This last element is crucial. One of the central goals motivating the drafters was to ensure the provision of “low-cost, generic drugs for millions of Americans.” Generic competition would “do more to contain the cost of elderly care than perhaps anything else this Congress has passed.” Generic challenges to brand patents thus are a central aspect of the Act. Settlements by which generics agree not to challenge patents threaten the drafters’ intentions.

Third, in the Hatch-Waxman setting, reverse payments are often the only indicator of a patent’s invalidity or lack of infringement. At the risk of oversimplifying, settlements within the scope of a valid patent are legitimate. Settlements dividing markets under cover of an invalid patent are not.

But the most direct way to determine these issues, patent litigation, cannot be utilized. For the significant analysis and testimony on complex issues—such as patent claim interpretation and infringement analysis—cannot be inserted as mini-trials in antitrust litigation. Nor would an analysis of the merits of the patent infringement case even be reliable: after a case settles, the parties’ interests become aligned, with a generic firm lacking the incentive to vigorously attack a patent’s validity or challenge a claim of infringement.

In many cases, therefore, reverse payments offer crucial indirect evidence of a patent’s invalidity. Brands that pay generics more than they ever could have gained from entering the market raise red flags of potential invalidity. Further hoisting such flags are the parties’ aligned incentives. Because the brand makes more by keeping the generic out of the market than the two parties would receive by competing in the market, the parties have an incentive to split the monopoly profits, making each better off than if the generic had entered.

What is particularly concerning about reverse payments is not the direction of the payment. Instead, it is that the payments often make possible agreements that do not reflect the parties’ reasonable assessment of success in patent litigation.

Let me offer an example. An agreement concerning the generic’s entry date, without any cash payment, often reflects the odds of the parties’ success in patent litigation. If there were 10 years remaining in the patent term and the parties agreed there was a 60 percent chance that a court would uphold the patent’s validity, the expected date of entry under litigation would occur in 6 years.

A brand is likely to gain additional exclusivity by supplementing the parties’ entry date agreement with a payment to the generic. Continuing the example, the brand could pay the generic to gain an additional 3 years (for a total of 9 years) of exclusivity. The monopoly profits the brand earned in these 3 years would vastly exceed the reduced profits it would earn from sharing the market with the generic. Even with a payment to the generic, the brand would still come out ahead. And the generic would also benefit, since the payment would exceed the profits it could have gained by entering the market.
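
To make the arithmetic concrete, here is a minimal sketch of the example; the profit figures are illustrative assumptions (not drawn from the book), chosen only so that monopoly profits exceed the parties’ combined duopoly profits:

```python
# Worked version of the settlement example above. All profit figures are
# hypothetical; the only structural assumption is that monopoly profit
# exceeds the parties' combined profits under competition.
PATENT_YEARS_LEFT = 10   # years remaining in the patent term
P_VALID = 0.6            # agreed odds a court upholds the patent

# Expected entry under litigation: year 10 if the patent is upheld,
# immediate entry otherwise.
expected_entry = P_VALID * PATENT_YEARS_LEFT + (1 - P_VALID) * 0
print(expected_entry)    # 6.0 -- entry in 6 years

MONOPOLY = 100           # brand's annual profit with no generic on the market
BRAND_DUOPOLY = 30       # brand's annual profit after generic entry
GENERIC_DUOPOLY = 20     # generic's annual profit after entry

def payoffs(entry_year, payment_to_generic=0):
    """Each party's total profit over the remaining term (in $ millions)."""
    years_shared = PATENT_YEARS_LEFT - entry_year
    brand = MONOPOLY * entry_year + BRAND_DUOPOLY * years_shared
    generic = GENERIC_DUOPOLY * years_shared
    return brand - payment_to_generic, generic + payment_to_generic

# Settlement tracking the litigation odds: entry in year 6, no payment.
print(payoffs(6))                           # (720, 80)

# Reverse payment settlement: entry in year 9, brand pays the generic 120.
# Each delayed year converts 50 of combined duopoly profit into 100 of
# monopoly profit, so both parties come out ahead: (810, 140).
print(payoffs(9, payment_to_generic=120))
```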

In buying more exclusivity than the patent alone could provide, reverse payments tend not to reflect an objective assessment of validity. In most cases, the patentee would not pay more than its litigation costs unless it believed it was buying later generic entry than litigation would provide. Notice that I said “most” and not “all.” In the book, my presumption of illegality for reverse payments is rebuttable, and I allow the parties to rebut it in several settings in which they could demonstrate the payment’s reasonableness (pages 378-82).

Dan’s attention to the recent wave of settlements lends further support to placing the burden on the settling parties to demonstrate that the payment reflects a reasonable assessment of success in the patent infringement case. No longer are brand firms making simple cash payments for generics not to enter the market. Instead, they are paying generics for IP licenses, for the supply of raw materials or finished products, and for helping to promote products. They are paying milestones, up-front payments, and development fees for unrelated products. And, in the latest trend, they are agreeing not to launch authorized, brand-sponsored generics.

Many of these provisions—such as a supply agreement by which a brand pays a generic even if it does not supply the product—exceed the fair market value for the item. Of particular concern, side payments appeared in nearly all the settlements that restrained generic entry but in few of the settlements that did not. Nor is the product provided by the generic typically even one that the brand had sought before settlement.

Congressman Rush’s proposed legislation would prohibit agreements by which a generic firm receives “anything of value” in exchange for not researching, developing, manufacturing, marketing, or selling the generic product. Such a formulation would cover not only the initial wave of direct payments from brand to generic but also the recent wave of “side deals.”

In short, I agree with Dan that the direction of the payments does not, by itself, warrant close scrutiny. I also agree that—due to the Hatch-Waxman framework—it has been typical for the direction to flow from brand-patentee to generic-infringer.

I part ways from Dan, however, in considering in my analysis (1) the importance of competition and generic patent challenges at the heart of the Hatch-Waxman Act, (2) the unique position of reverse payments in determining patent validity in this context, and (3) the latest wave of settlements, which create ever more numerous versions of “three-drug Monte.”

Will the Supreme Court Really Use Trinko to Invalidate Settlements?

Phil raises the important point that the Supreme Court might not be inclined to apply Trinko to expand antitrust liability in the context of reverse payment settlements.

For starters, Trinko was decided on a motion to dismiss. Phil and I have both written about this. In the case, incumbent local exchange carrier (ILEC) Verizon refused to share its network with rivals. In assuming the efficacy of the Telecommunications Act of 1996, the Court pointed to penalties and reporting requirements imposed on Verizon. But as applied to billion-dollar industries, the agency’s fines—as former FCC Chairman Michael Powell has explained—“are trivial [and] are the cost of doing business to many of the[] companies.” Given that the Court addressed these issues at the motion-to-dismiss stage, it seemed only to assume the effectiveness of the regulatory regime’s remedies.

Applying this framework to Hatch-Waxman, it is possible that the Court could assume that the regime is effective. While possible, that approach would neglect the unavoidable presence of settlements, which dispense with the promotion of competition and patent challenges at the heart of the Act. It would be harder, in other words, to sweep effectiveness under the rug when the very class of agreements at issue takes such direct aim at the Act’s purposes.

If I can expand the point, I recognize that my proposal is ambitious. While Trinko has engendered significant commentary, none of that commentary has yet advocated a new tool for plaintiffs! I need to be clear, then, that I don’t think the Supreme Court—if it were interested in applying Trinko to pharmaceutical settlements—would necessarily come out my way. In fact, given the trend in the Court, which has proven beneficial to antitrust defendants in recent years, the odds may well be against me.

But my proposal is not based on what I predict the Court is likely to do. Instead, it teases out the importance of a factor that, until now, has received insufficient attention: the effectiveness of a regulatory regime. The Court has focused on the existence of such a regime in downplaying the need for antitrust. But before it forces antitrust to step down, the Court should direct some inquiry to the effectiveness of the regulatory regime. For if it does not, it risks dispensing with antitrust in settings in which regulation is not effective.

To be sure, difficult issues could arise where Congress deliberately creates an ineffective regime. But Hatch-Waxman does not confront such issues. The drafters themselves lamented reverse payments, with Senator Hatch finding such agreements “appalling” and Representative Waxman explaining that such agreements were an “unfortunate, unintended consequence” of the Act that “turned . . . the law . . . on its head.” In short, this is not a setting in which there are close calls about whether the drafters intended to create a regulatory regime that fostered competition and patent challenges.

Dennis Crouch and the Patent Proposals

Turning to patent law, Dennis Crouch raises several points about my patent recommendations. Let me address the four major ones.

The first is the most far-reaching. Dennis concludes that I “rather consistently choose[] sides in favor of weaker patents.” I am not certain that is the case. As I discuss below in response to Scott Kieff’s post, I did not include many potential proposals—covering patentable subject matter, nonobviousness, and a robust experimental use defense, to name just a few—that an array of patent scholars has offered in recent years and that would have more significantly weakened patents.

In addition, the three patent proposals I offer do not consistently favor weaker patents. The first, to be sure, could be placed in such a category (although my proposal is far from the only one to recommend a post-grant opposition system). The second clarifies existing case law, fleshing out the framework for relief that the Supreme Court articulated in eBay. And the third explains why more aggressive proposals for experimental use in the setting of biotechnology research tools are not appropriate at this time. This last setting stands in contrast to my more ambitious proposal for material transfer agreements (MTAs), which implicate patents far less directly.

Second, Dennis raises a significant practical comment: Is the game of preventing holdup worth the candle of increased litigation costs and potential reduced innovation incentives? Rather than debate this on a theoretical plane, let me offer some examples that arise from an application of eBay and MedImmune.

In emphasizing the default position of injunctive relief but recognizing the propriety of damages in certain settings, I offer a proposal consistent with that articulated by the Supreme Court in eBay. In making clear what factors the courts should consider in applying the four-part framework for determining appropriate relief, my proposal could provide guidance to lower courts.

And as the post-eBay cases reveal, there have been several cases in which (1) the patentee does not directly compete with the alleged infringer, (2) the infringed claims make up a small part of the defendant’s product, and (3) a defendant would suffer greater hardship from the grant of an injunction than a plaintiff would suffer from its denial. See Paice v. Toyota (defendant Toyota’s hybrid vehicles infringed plaintiff Paice’s patents, which implicated only a part of the hybrid transmission among the tens of thousands of parts making up a typical car, with injunctive relief threatening adverse effects on third party dealers and suppliers); z4 v. Microsoft (Microsoft infringed z4’s product activation software that was a “very small component” and not related to the “core functionality” of Windows in a setting in which injunctive relief would have required Microsoft to release new versions of its Windows software in 600 variations in more than 40 languages).

MedImmune paved the way for cases like Teva v. Novartis, which could have significant effects in the Hatch-Waxman setting. Here (oversimplifying slightly), the first generic to challenge a brand patent’s validity or claim noninfringement is entitled to a 180-day period of marketing exclusivity. The temptation is for the brand and first-filing generic to settle patent litigation with the generic agreeing not to enter the market. The bottleneck arises because if the brand decides not to sue other generics and the first-filing generic does not enter the market, then subsequent generics cannot enter. By increasing the scope of declaratory judgment actions, these generics might be able to sue the brand and ultimately enter the market.

Third, to be clear, I intended to limit my argument about the benefits of challenging invalid patents to invalid patents. While others have called into question the entire patent system, that is not (and never has been) my goal. In contrast, I recognize the importance of patents, especially in the pharmaceutical and biotechnology industries (see page 47).

Fourth, my post-grant opposition system—which Dennis notes is similar to that contained in the Patent Reform Act of 2009 (though not identical, as my broader windows for challenge attest)—does not consider the PTO’s “current mantra favoring rejection.” Innovation incentives could be affected by a more robust mechanism for challenging patents in this environment.

But there can be no question that numerous invalid patents have been issued. While the subset of litigated patents does not precisely reflect the universe of all patents, the findings of Allison/Lemley, Moore, and the University of Houston’s PATSTATS that roughly 30 to 50 percent of litigated patents are invalid would lend strong support to a mechanism to reduce the incidence of such patents.

In addition, I build into my opposition system various measures (such as one-way fee shifting mechanisms and a system based on maintenance fees) that could allow the regime to be calibrated to reduce adverse effects on innovation incentives. But given the prevalence of invalid patents, together with less-than-ideal current alternatives (initial patent application review, validity litigation, and reexamination), post-grant opposition makes sense.

Are Material Transfer Agreements Really Material to an IP/Antitrust Book?

Many scientists need tangible materials for their research. Unlike the situation of patented research tools, scientists often cannot circumvent a refusal to license materials. In providing materials, the owners frequently require recipients to enter into material transfer agreements (MTAs).

There are two responses to Dennis’s question as to why I cover MTAs in the book. First, patent issues sometimes are implicated, as reach-through provisions attest. Second, MTAs offer a useful comparison to patented research tools by revealing empirical evidence demonstrating withheld materials, abandoned research lines, delays in receiving materials, and publication restrictions (pages 281-83).

My proposal requiring recipients of federal funding to agree to the provisions of the uniform biological MTA would lower transaction costs by increasing adherence to the model agreement. In addition, I would (as I discuss below) suggest model publication terms for transfers between universities and industry.

Scott Kieff and the Unaddressed Commentary

Scott Kieff correctly points out that in the book, I do not specifically address the work of many important patent scholars, such as Richard Epstein, Polk Wagner, John Duffy, and Scott himself.

Just because a scholar does not appear in the book, however, does not mean that their work has not influenced me. My patent proposals, as discussed above, are modest in nature. Many patent scholars have advocated more aggressive sets of proposals. But several decisions I made in cabining my universe of patent proposals relied in part on the insights of the “property” scholars Scott references. Let me offer a few examples, referencing these scholars’ writings (and leaving aside for the moment the effects of recent changes in the law).

First, I did not offer a proposal on nonobviousness, as a reduced need for reform is apparent from the empirical studies by Polk Wagner and Chris Cotropia, together with Greg Mandel’s important work on hindsight bias.

Second, I did not address patentable subject matter, based in part on arguments such as those offered in the Wagner/Risch/Lemley amicus brief and Duffy brief in Bilski.

A third example comes from Scott’s own work. I had the pleasure of responding, at the 2008 George Mason/Microsoft conference on the Law and Economics of Innovation, to Scott’s work on the cumulative effect of recent changes in patent law. My consideration of this work played a role in the drafting of my chapter on eBay and patent remedies, in which I sought to ensure that the concept of patent trolls did not play a role in the construction of a relevant framework.

Finally, in my discussion of research tools in the biotechnology industry, I specifically rely (on page 263) on the empirical work of Joseph Straus (an overview of his study is available here), not to mention studies in Australia and Japan, along with several surveys by John Walsh and his coauthors, and nuanced explorations of the biotechnology industry such as those by David Adelman.

I could have explained these decisions in the book. Along similar lines, in the introduction to my patent section, I discussed why (as Dennis points out) Supreme Court and Federal Circuit opinions such as KSR, MedImmune, and Seagate reduced the need for certain proposals and why eBay reduced the ambition of my recommendation on patent remedies. In the end, however, with a book already weighing in at 400+ pages, it seemed reasonable not to address each of these points.

And that is one reason why I am grateful for this blog symposium, as well as Scott’s attention to the issue. For while the modesty of my patent proposals could allow one creatively to read between the lines to discern an indirect reliance on the property scholars, such discernment is far less direct than our discussion here.

Scott’s Uncertainty

Scott also points to the changes that patent law has experienced in recent years to ask whether too much uncertainty has been introduced into the system.

Scott and I debated this last year at the conference mentioned above. Scott is correct that the global effect of changes to various aspects of patent law—made in rapid succession and sometimes ambitiously—is difficult to ascertain. But particular changes, even though they may be more flexible than the previous law, might solve certain problems. See Seagate (prior to which willful infringement was alleged in, according to one study, 92 percent of patent cases), Merck v. Integra (in which the Supreme Court made clear that the statutory experimental use doctrine applied not just to drug products in clinical trials but also to those in preclinical studies), and MedImmune (which paved the way for cases like Teva v. Novartis, increasing the likelihood of declaratory judgment actions and opening the Hatch-Waxman bottleneck discussed above).

Do the patent proposals I offer increase uncertainty? Given that I aim to clarify the eBay framework and that I do not currently call for an expansion of the experimental use doctrine to cover research tools in the biotechnology industry, the issue would seem to devolve to my proposal for a post-grant opposition. For reasons I discuss, though, and assuming that challenges to invalid patents should play a role in any patent system, my proposed opposition would (as discussed above) appear superior to the alternatives.

There is still the counterargument, of course, that any challenge to patents could reduce innovation incentives. That is a difficult question to answer comprehensively, though (as I mention above) I incorporate mechanisms into my post-grant opposition proceeding that allow the regime to be calibrated to reduce the magnitude of any such effect.

Addressing a related issue, Scott asks “What is so precarious” about the state of affairs for biotechnology research tools “and why would a few lawsuits disrupt it?” The answer stems from the Federal Circuit’s restriction of the experimental use defense, which has led to an array of everyday conduct by lab researchers technically constituting infringement.

This state of affairs rests precariously on industry’s continuing to refrain from suing universities. In the book, I describe the symbiotic relationship between these two actors. My point, though, is that a few lawsuits could disrupt this fragile equilibrium. As I conclude (on page 267): “[I]f companies begin to sue universities, the dangers of stifled innovation would rise and the informal norms would be stripped away, laying bare the constricted state of the case law.”

Brett Frischmann and P2P Asymmetries

Turning to copyright law, one of my proposals addresses dual-use technologies, such as peer-to-peer (p2p) file-sharing software. These technologies can be utilized (1) to create revolutionary new forms of interaction and entertainment or (2) to facilitate widespread copyright infringement.

How, then, should copyright law treat these technologies? Should it consider the technology’s primary use? Determine whether it has a substantial noninfringing use? Examine its creator’s intent? Courts have considered these tests, among others, in applying copyright law to dual-use technologies.

In my proposal, I show how most of these tests threaten to stifle innovation. In his post, Brett Frischmann focuses on the reasons why, discussing in particular the three asymmetries I develop.

First, I introduce an “innovation asymmetry,” which highlights why courts tend to overemphasize a technology’s infringing uses and underappreciate its noninfringing uses (pages 128-30). I contend that the costs of infringing uses can be quantified and are accentuated by the abundant evidence: because infringement has already occurred, plaintiffs need not speculate about future potential infringement. Surveys of downloaded works present tangible evidence of (often massive) copyright infringement to the court on a silver platter. Moreover, the costs are vivid in threatening the copyright industries’ business models. Finally, all of the tasks needed to demonstrate harms from copyright infringement can easily be undertaken by the recording and motion picture industries.

In contrast, noninfringing uses are less tangible. It is difficult to put a dollar figure on the benefits of enhanced communication and interaction. These uses also develop more fully over time. When a new technology is introduced, no one, including the inventor, knows all of the beneficial uses to which it will eventually be put. I offer numerous examples of inventions for which nobody foresaw the eventual popular and revolutionary use (including, just to pick two, the telephone, which Alexander Graham Bell thought would be used primarily to broadcast the daily news, and the phonograph, which Thomas Edison thought would be used “to record the wishes of old men on their death beds”). Finally, I contend that the disappearance of noninfringing uses (along with the new technology) will not be lamented: uses that never develop do not disrupt settled expectations.

Second, I introduce an error-costs asymmetry (page 131). One type of error (a false positive or Type I error) occurs in the P2P setting when a court erroneously shuts down a technology. The other type (a false negative or Type II error) occurs when a court mistakenly upholds the technology even though it should have imposed liability. I contend that in the Type-II-error case, society can witness the effects of the technology. I argue that Congress can always step in to compensate copyright holders. Brett is correct that as a practical matter, such relief may not be immediately forthcoming. But copyright holders have had some success at getting Congress’s attention, so it at least is within the realm of possibility.

In contrast, in the Type-I-error case, consumers will not know what they are missing. Will this, Brett asks, hold true for every technology? Perhaps not. But if we know anything about innovation, it is that we don’t know a lot about the uses to which fledgling products will eventually be put. If the inventors themselves often cannot discern the ultimate use, we cannot be confident that courts can.

Finally, I unearth a litigation asymmetry (pages 131-33) that arises from the effect of the test on technology manufacturers. I contend that protracted litigation is expensive and favors those with deep pockets. In contrast, upstart dual-use manufacturers often lack the financial resources to wage lengthy legal battles. Given that some of the most revolutionary innovation comes from small inventors—such as the “upstarts who developed the first MP3 players” in the 1990s, which paved the way for the iPod—such consequences are severe. A legal standard that does not resolve the issue of secondary liability at an early stage of the proceedings will lead to “debilitating uncertainty” and exert a chilling effect on innovation. I recount several cases in which technology companies were forced into bankruptcy as a result of litigation.

As a concluding note, Brett suggests that I should have more fully engaged arguments about the benefits of requiring technology manufacturers to implement cheap, easy technological fixes. Given the importance of the issue, and my continuing engagement with these ideas, I will certainly take Brett up on his invitation in future work.

For our purposes here, let me just explain my concern (as I discuss on pages 137-38) that such determinations introduce complexity and eliminate early disposition of a case. For example, litigation over which fingerprinting system to adopt presents a nuanced factual question and forces judges to grapple with intractable issues about the sufficiency of various solutions. In Napster, even though the company examined dozens of audio fingerprinting systems and installed one that “was able to prevent sharing of much of plaintiffs’ noticed copyrighted works,” the court demanded “zero tolerance” and shut down Napster.

More broadly, I explain that feasibility questions could “enmesh courts in disputes comparable to those that have bedeviled design defect litigation in products liability.” For in cases involving manufacturing flaws, courts can compare a product to the manufacturer’s standards. In contrast, there is no objective standard of comparison for design defects since the product is used in its intended condition. Courts lacking a benchmark could be tempted to find that defendants failed to do enough.

Statutory Damages and the DMCA

I explore copyright’s effects on innovation not only in the realm of liability standards but also in the context of remedies. The copyright laws give owners, in the case of willful infringement, the ability to recover damages as high as $150,000 per infringing work. In the context of dual-use technologies, which could involve thousands of copyrighted works, potential damages could reach into the billions of dollars. For that reason, and because such liability is not consistent with the drafters’ intentions regarding statutory damages, I recommend eliminating the remedy for technology manufacturers.

Brett notes that my discussion of this issue is “probably the least controversial in the book.” If only Congress would see it that way. In May 2008, the House of Representatives passed the Prioritizing Resources and Organization for Intellectual Property (PRO IP) Act of 2008, which was designed to increase IP enforcement. One provision in an earlier version of the legislation would have increased damages, allowing copyright owners to obtain “multiple awards of statutory damages” for the infringement of compilations. In other words, owners could have sought statutory damages for each song on a CD or each article and photograph in a magazine. This provision was ultimately removed from the bill due to opposition and copyright owners’ inability to offer any examples of inadequate compensation. The incident nonetheless demonstrates that my proposal might be a little more controversial in the halls of Congress than one might imagine.

My final copyright proposal addresses the Digital Millennium Copyright Act (DMCA). This Act has expanded beyond its drafters’ intentions in covering functional devices that contain small pieces of software. As a result, owners have prevented interoperability in alarming situations that involve printer toner cartridges and garage door openers. Anyone who has paid an exorbitant price for replacement inkjet printer cartridges (which cost more, per milliliter, than Dom Perignon champagne) knows the power of such control. I thus offer a proposal that limits the Act to the creative works the drafters envisioned.

Geoff’s “Two Books” Critique

We begin our final sections where we began this post—with several critiques from Geoff. The first is that this is really two books rolled into one. Geoff is right that Part I provides information of a more basic nature. My first five chapters offer a background on the IP and antitrust regimes, with an emphasis on the intersection of the two regimes, as well as innovation.

As should be apparent by now, Parts II (copyright), III (patent), and IV (antitrust) are more sophisticated, plunging into ongoing, cutting-edge debates and offering ten proposals designed to foster innovation. The two halves of the book are related, however: Part I is designed to give the reader the tools needed to understand the proposals that follow.

Geoff’s “Insufficient Support” Critique

Geoff levels perhaps his most fundamental critique (repeated by Dennis) that I “canvass[] both sides of some pretty heated debates,” state that these “are matters about which we are profoundly uncertain,” and “with what seems . . . to be little support . . . then choose[] sides.” I respectfully disagree. Rather than debating on a lofty plane, let’s take a look at the book’s goals, followed by the proposals.

The book’s goal, in a nutshell, is to foster a greater appreciation for innovation among the patent, copyright, and antitrust regimes. To be sure, the regimes come to the innovation beachhead in different guises. The patent system comes under scrutiny and with numerous fix-it books and proposals in tow. The antitrust system sails with the wind at its back, having cleaned up much of its innovation act in the past generation. And the copyright regime swims upstream, with recent developments pushing it further from the innovation shore.

One of the central difficulties, which will not be news to any readers, is that innovation’s importance is matched by its difficulty of measurement. Partly as a result, antitrust courts have historically focused on the more measurable indicator of price. And copyright courts have emphasized the more observable effect of infringement.

The project of this book is to put innovation front and center. As I conclude on page 2: “The difficulty of measuring innovation does not mean it should be ignored. It only means, given its importance, that we need to redouble our efforts to account for it.”

Conceiving the project in these terms explains why I crafted chapters the way I did. In the copyright arena, for example, the law of secondary liability has amassed new tests, bells, and whistles in the past decade. My book is not intended to tread the well-worn path of showing how this latest judicial treatment, as revealed in cases such as Napster, Aimster, and Grokster, smartly updates the older “VCR” test for a modern P2P era. There is plenty of that commentary already.

Instead, what I seek to do is offer the strongest possible manifesto for innovation in this setting. Without repeating the 40 pages of my chapter on P2P and other dual-use technologies, I argue for a return to the Sony standard, which defers to technologies as long as they are “capable of substantial non-infringing uses.” My chapter develops numerous arguments that reveal the consequences of insufficiently appreciating innovation. Others, including Fred von Lohmann and Tony Reese, have voiced some of these arguments before. But I seek to expand these arguments in a chapter that

• (1) explores the creativity-innovation tradeoff,
• (2) introduces the innovation asymmetry,
• (3) develops the error-costs asymmetry,
• (4) unearths the litigation asymmetry,
• (5) analyzes P2P’s benefits in distribution, promotion, and fostering the “long tail,” and
• (6) explores the tip of the innovation iceberg (which considers P2P’s future roles in offering a potential antidote to cloud computing and Google’s search engine).

Addressing Geoff’s critique head on, does this consider two sides of the issue before superficially selecting one? I don’t think so. Again, I am offering the strongest argument for the incorporation of innovation into copyright’s secondary liability analysis, where it is currently absent. As I explore above, my three asymmetries present new arguments supporting innovation in this setting.

But what about my attempt to address the creativity-innovation tradeoff? The tradeoff arises because copyright infringement could harm creativity while attempts to punish intermediaries could stifle innovation. In my book, I address this tradeoff in the setting of P2P and CD sales. I conclude that innovation is far more directly affected by the liability test selected than creativity is. As I explain on pages 120-28, this point is supported by the findings that (1) there are numerous reasons why CD sales have declined in recent years, (2) copyright holders have many potential remedies other than targeting P2P networks, (3) individual artists play a crucial role in creativity, and (4) innovation can create new markets and models for copyrighted works.

Perhaps there is an economic model that could more definitively resolve the creativity-innovation tradeoff. But if there is, I haven’t seen it. And I would be surprised if a universal framework for weighing such apples and oranges were available.

Space prevents me from exploring each of my proposals in this level of detail. But before moving on, let me make one other point.
One of the tools I use in several chapters is legislative history. In the settings in which I enlist these histories, we find them covered in dust, not employed as useful guides to an appropriate analysis. My analysis reminds us how the drafters of the DMCA targeted pirates, not household devices; how statutory damages were designed to assure adequate compensation, not stifle investment and innovation; and how the Hatch-Waxman Act encouraged generic competition, not settlements prohibiting patent challenges. Where courts have gone astray, these unexploited tools offer significant benefits.

Did I Erroneously Omit Error Costs and Other Pragmatic Considerations?

Geoff’s final general point is that “there is almost no discussion of error costs in the book—no discussion of bureaucratic agency issues, judicial process problems, public choice problems, and the like.” Considered a bit more broadly, however, error costs and pragmatic considerations appear in most of the book’s proposals.

For starters, I show how error costs support presumptive illegality for drug settlements (pages 370-71). And, as discussed above, I reveal how error costs play an essential role in my P2P chapter (page 131).

Pragmatic concerns underlie my other proposals as well. Stated briefly, my recommendations for statutory damages and the eBay framework for patent relief are designed to be simple enough to be easily applied by courts. Congress can enact a post-grant opposition system similar to the one I propose, and in fact is considering one as I write.

Finally, universities can adopt material transfer agreements (MTAs), which have often accompanied the transfer of materials to researchers. Business realities were front and center in my crafting of this proposal. I concluded that academia and industry were more likely to agree on model publication terms (which prohibit delay in publishing research findings) than on reach-through licenses that reserve rights to materials owners (and cannot realistically be restricted when firms believe their “crown jewels” are at stake). As I conclude (on page 289): “if firms assert that reach-through provisions are needed because of a specific material’s importance, it would be counterproductive to second-guess the decision and demonstrate the superiority of adherence to the UBMTA.”

$65!

Dennis makes perhaps the most relevant observation of all when he laments the book’s $65 price tag. As the author of a $65 book trying to sell copies, I must concede that his concern has crossed my mind on more than one occasion.
Why is the price so high? Because Oxford divides its collection into academic books and trade books. Academic books often wind up on library shelves and carry a higher price tag. (If only a few hundred libraries, and scattered others, will purchase the book, then at least the publisher can maximize what it gets from this group.) I wrote the book, however, hoping to deliver it into the hands of more readers than the few librarians shelving it.

In any event, with today’s 20 percent off price at Amazon (sorry – that was a bit direct), the $52 tag is a little less painful on the wallet.

Conclusion

I have been honored to have my book serve as the centerpiece of the first blog symposium at TOTM, and to be part of the cross-posting at PatentlyO. As I mentioned at the start of my post, it is not every day that a scholar’s work can be poked, prodded, and improved from every conceivable angle, and that is what Brett, Dan, Dennis, Geoff, Josh, Phil, and Scott have done. I am grateful for their thoughtful comments on what appears in the book, as well as their pushing me to expand on issues that the book did not fully develop.

Most important, I would like to thank you, the readers of TOTM and PatentlyO, who have taken time out of your busy schedules and usual diets of breaking antitrust and patent developments to spend some time getting to know the book. I am grateful for your attention, and look forward to continuing the conversation in the comments or elsewhere.