Archives For SSRN

An important new paper by FTC Commissioner Joshua Wright and Joanna Tsai was recently posted to SSRN.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the contractual commitment that attaches to a SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, especially given the heightened interest in these issues at the FTC and the Antitrust Division of the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade-off” between the benefits of incentivizing new works and the social deadweight losses created by the access restrictions these (temporary) “monopolies” impose. I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles distributed each year in a wide-ranging variety of academic disciplines and fields of research. The articles themselves now number in the millions; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-create conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health and in recently proposed legislation (the FASTR Act) and other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), in which he blithely dismissed commercial incentives as irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which have become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary to justify how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of the labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1985 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten that the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms that own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences, he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers.  Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details at great length, for the first time in a publication, the necessary transaction costs incurred by any successful publishing enterprise in the Internet age.  To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content were over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers.  Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, and it ultimately did not sell many copies at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers.  It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot-com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood.  Individual publishers now receive hundreds of thousands of manuscripts per year; the large scholarly publisher Reed Elsevier receives more than one million. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies reporting that the peer review system costs approximately $2.9 billion annually to operate (translating into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, maintaining cyber security of the websites, and addressing related digital issues.
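As a rough consistency check on that conversion (the implied exchange rate below is my own back-of-the-envelope calculation, not a figure reported in the study):

\[
\frac{\$2.9\ \text{billion}}{\pounds 1.9\ \text{billion}} \approx 1.5\ \text{dollars per pound}.
\]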

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that is universally recognized for authors by the incentive-to-create conventional wisdom: no one will make the necessary investments to create or distribute a work if the fruits of their labors are not secured to them. This basic economic fact—that dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.

NOTE: Some small copy-edits were made to this blog posting.

 

Thank you to Josh for inviting me to guest blog on Truth on the Market.  For my first blog post, I thought TOTM readers would enjoy reading about my latest paper, which I posted to SSRN and which has been getting some attention in the blogosphere (see here and here).  It’s a short, 17-page essay — see, it is possible for law professors to write short articles — called The Trespass Fallacy in Patent Law.

This essay responds to the widely heard cries today that the patent system is broken, as expressed in the popular press and by tech commentators, legal academics, lawyers, judges, congresspersons and just about everyone else.  The $1 billion verdict issued this past Friday against Samsung in Apple’s patent infringement lawsuit hasn’t changed anything. (If anything, Judge Richard Posner finds the whole “smart phone war” to be Exhibit One in the indisputable case that the patent system is broken.)

Although there are many reasons why people think the patent system is systemically broken, one common refrain is that patents fail as property rights because patent infringement doctrine is not as clear, determinate and efficient as trespass doctrine is for real estate. Thus, the explicit standard that is invoked to justify why we must fix patent boundaries — or the patent system more generally — is that the patent system does not work as clearly and efficiently as fences and trespass doctrine do in real property. As Michael Meurer and James Bessen explicitly state in their book, Patent Failure: “An ideal patent system features rights that are defined as clearly as the fence around a piece of land.”

My essay explains that this is a fallacious argument, suffering both empirical and logical failings. Empirically, there are no formal studies of how trespass functions in litigation; thus, complaints about the patent system’s indeterminacy are based solely on an idealized theory of how trespass should function.  Oftentimes, patent scholars, like my colleague T.J. Chiang, simply assert without any supporting evidence whatsoever that fences are “crystal clear” and thus that there are “stable boundaries” for real estate; T.J. thus concludes that the patent system is working inefficiently and needs to be reformed (as captured in the very title of his article, Fixing Patent Boundaries). The variability in patent claim construction, asserts T.J., is tantamount to “the fence on your land . . . constantly moving in random directions. . . . Because patent claims are easily changed, they serve as poor boundaries, undermining the patent system for everyone.”

Other times, this idealized theory about trespass is given some credence by appeals to loose impressions or a gestalt of how trespass works, or by appeals to anecdotes and personal stories about how well trespass functions in the real world. Bessen and Meurer do this in their book, Patent Failure, where they back up their claim that trespass is clear with a search they apparently did on Westlaw for innocent trespass cases in California over a three-year period. Either way, assertions backed by intuitions or a few anecdotal cases cannot serve as an empirical standard by which one makes a systemic evaluation that we should shift to another institutional arrangement because the current one is operating inefficiently. In short, the trespass standard represents the nirvana fallacy.

Even more important, anecdotal evidence and related studies suggest that trespass and other boundary disputes between landowners are neither as clear nor as determinate as patent scholars assume them to be (something I briefly summarize in my essay, where I also call for more empirical studies to be done).

Logically, the comparison of patent boundaries to trespass commits what philosophers would call a category mistake. It compares the boundaries of an entire legal right (a patent) not with the boundaries of its conceptual counterpart (real estate), but rather with a single doctrine (trespass) that secures real estate along only one dimension (geographic boundaries). As all 1Ls learn in their Property courses, real estate is not land. Accordingly, estate boundaries are defined along the dimensions of time, use and space, as represented in myriad doctrines like easements, nuisance, restrictive covenants, and future interests, among others. In fact, the overlapping possessory and use rights shared by owners of joint tenancies, or by owners of possessory estates with overlapping future interests, bear many conceptual and doctrinal similarities to the overlapping rights that patent owners may have over a single product in the marketplace (like a smart phone).  In short, the proper conceptual analog for patent boundaries is estate boundaries, not fences.

In sum, the trespass fallacy is driving an indeterminacy critique in patent law that is both empirically unverified and conceptually misleading. Check out my essay for much more evidence and a more in-depth explanation of why this is the case.

HT: Danny Sokol.

TOP 10 Papers for Journal of Antitrust: Antitrust Law & Policy eJournal June 4, 2012 to August 3, 2012.

Rank Downloads Paper Title
1 244 The Antitrust/Consumer Protection Paradox: Two Policies at War with Each Other 
Joshua D. Wright,
George Mason University – School of Law, Faculty,
Date posted to database: May 31, 2012
Last Revised: May 31, 2012
2 237 Cartels, Corporate Compliance and What Practitioners Really Think About Enforcement 
D. Daniel Sokol,
University of Florida – Levin College of Law,
Date posted to database: June 7, 2012
Last Revised: July 16, 2012
3 175 The Implications of Behavioral Antitrust 
Maurice E. Stucke,
University of Tennessee College of Law,
Date posted to database: July 17, 2012
Last Revised: July 17, 2012
4 167 The Oral Hearing in Competition Proceedings Before the European Commission 
Wouter P. J. Wils,
European Commission, University of London – School of Law,
Date posted to database: May 3, 2012
Last Revised: June 18, 2012
5 141 Citizen Petitions: An Empirical Study 
Michael A. Carrier and Daryl Wander,
Rutgers University School of Law – Camden, Unaffiliated Authors - affiliation not provided to SSRN,
Date posted to database: June 4, 2012
Last Revised: June 4, 2012
6 138 The Role of the Hearing Officer in Competition Proceedings Before the European Commission 
Wouter P. J. Wils,
European Commission, University of London – School of Law,
Date posted to database: May 3, 2012
Last Revised: May 7, 2012
7 90 Google, in the Aftermath of Microsoft and Intel: The Right Approach to Antitrust Enforcement in Innovative High Tech Platform Markets? 
Fernando Diez,
University of Antonio de Nebrija,
Date posted to database: June 12, 2012
Last Revised: June 26, 2012
8 140 Dynamic Analysis and the Limits of Antitrust Institutions 
Douglas H. Ginsburg and Joshua D. Wright,
U.S. Court of Appeals for the District of Columbia, George Mason University – School of Law, Faculty,
Date posted to database: June 14, 2012
Last Revised: June 17, 2012
9 114 Optimal Antitrust Remedies: A Synthesis 
William H. Page,
University of Florida – Fredric G. Levin College of Law,
Date posted to database: May 17, 2012
Last Revised: July 29, 2012
10 111 An Economic Analysis of the AT&T-T-Mobile USA Wireless Merger 
Stanley M. Besen, Stephen Kletter, Serge Moresi, Steven C. Salop, and John Woodbury,
Charles River Associates (CRA), Charles River Associates (CRA), Charles River Associates (CRA), Georgetown University Law Center, Charles River Associates (CRA),
Date posted to database: April 25, 2012
Last Revised: April 25, 2012

TOTM friend Stephen Bainbridge is editing a new book on insider trading.  He kindly invited me to contribute a chapter, which I’ve now posted to SSRN (download here).  In the chapter, I consider whether a disclosure-based approach might be the best way to regulate insider trading.

As law and economics scholars have long recognized, informed stock trading may create both harms and benefits to society. With respect to harms, defenders of insider trading restrictions have maintained that informed stock trading is “unfair” to uninformed traders and causes social welfare losses by (1) encouraging deliberate mismanagement or disclosure delays aimed at generating trading profits; (2) infringing corporations’ informational property rights, thereby discouraging the production of valuable information; and (3) reducing trading efficiency by increasing the “bid-ask” spread demanded by stock specialists, who systematically lose on trades with insiders.

Proponents of insider trading liberalization have downplayed these harms.  With respect to the fairness argument, they contend that insider trading cannot be “unfair” to investors who know in advance that it might occur and nonetheless choose to trade.  And the purported efficiency losses occasioned by insider trading, liberalization proponents say, are overblown.  There is little actual evidence that insider trading reduces liquidity by discouraging individuals from investing in the stock market, and it might actually increase such liquidity by providing benefits to investors in equities.  With respect to the claim that insider trading creates incentives for delayed disclosures and value-reducing management decisions, advocates of deregulation claim that such mismanagement is unlikely for several reasons.  First, managers face reputational constraints that will discourage such misbehavior.  In addition, managers, who generally work in teams, cannot engage in value-destroying mismanagement without persuading their colleagues to go along with the strategy, which implies that any particular employee’s ability to engage in mismanagement will be constrained by her colleagues’ attempts to maximize firm value or to gain personally by exposing proposed mismanagement.  With respect to the property rights concern, deregulation proponents contend that, even if material nonpublic information is worthy of property protection, the property right need not be a non-transferable interest granted to the corporation; efficiency considerations may call for the right to be transferable and/or initially allocated to a different party (e.g., to insiders).  Finally, legalization proponents observe that there is little empirical evidence to support the concern that insider trading increases bid-ask spreads.

Turning to their affirmative case, proponents of insider trading legalization (beginning with Geoff’s dad, Henry Manne) have primarily emphasized two potential benefits of the practice.  First, they observe that insider trading increases stock market efficiency (i.e., the degree to which stock prices reflect true value), which in turn facilitates efficient resource allocation among capital providers and enhances managerial decision-making by reducing agency costs resulting from overvalued equity.  In addition, the right to engage in insider trading may constitute an efficient form of managerial compensation.

Not surprisingly, proponents of insider trading restrictions have taken issue with both of these purported benefits. With respect to the argument that insider trading leads to more efficient securities prices, ban proponents retort that trading by insiders conveys information only to the extent it is revealed, and even then the message it conveys is “noisy” or ambiguous, given that insiders may trade for a variety of reasons, many of which are unrelated to their possession of inside information.  Defenders of restrictions further maintain that insider trading is an inefficient, clumsy, and possibly perverse compensation mechanism.

The one thing that is clear in all this is that insider trading is a “mixed bag.”  Sometimes such trading threatens to harm social welfare, as in SEC v. Texas Gulf Sulphur, where informed trading threatened to prevent a corporation from capitalizing on a valuable opportunity.  But sometimes such trading creates net social benefits, as in Dirks v. SEC, where the trading revealed massive corporate fraud.

As regular TOTM readers will know, optimal regulation of “mixed bag” business practices (which are all over the place in the antitrust world) requires consideration of the costs of underdeterring “bad” conduct and of overdeterring “good” conduct.  Collectively, these constitute a rule’s “error costs.”  Policy makers should also consider the cost of administering the rule at issue; as they increase the complexity of the rule to reduce error costs, they may unwittingly drive up “decision costs” for adjudicators and business planners.  The goal of the policy maker addressing a mixed bag practice, then, should be to craft a rule that minimizes the sum of error and decision costs.
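To make that framework concrete, here is one way to write the policy maker’s problem in symbols; the notation is mine, not the chapter’s:

\[
\min_{r \in R} \; \bigl[\, C_{\text{under}}(r) + C_{\text{over}}(r) + C_{\text{dec}}(r) \,\bigr],
\]

where R is the set of candidate liability rules, C_under(r) is the expected cost of under-deterring harmful (“bad”) conduct under rule r, C_over(r) is the expected cost of over-deterring beneficial (“good”) conduct, and C_dec(r) is the cost of administering the rule. The first two terms are the rule’s error costs; the third is its decision costs.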

Adjudged under that criterion, the currently prevailing “fraud-based” rules on insider trading fail.  They are difficult to administer, and they occasion significant error cost by deterring many instances of socially desirable insider trading.  The more restrictive “equality of information-based” approach apparently favored by regulators fares even worse.  A contractarian, laissez-faire approach favored by many law and economics scholars would represent an improvement over the status quo, but that approach, too, may be suboptimal, for it does nothing to bolster the benefits or reduce the harms associated with insider trading.

My new book chapter proposes a disclosure-based approach that would help reduce the sum of error and decision costs resulting from insider trading and its regulation.  Under the proposed approach, authorized informed trading would be permitted as long as the trader first disclosed to a centralized, searchable database her insider status, the fact that she was trading on the basis of material, nonpublic information, and the nature of her trade.  Such an approach would (1) enhance the market efficiency benefits of insider trading by facilitating “trade decoding,” while (2) reducing potential costs stemming from deliberate mismanagement, disclosure delays, and infringement of informational property rights.  By “accentuating the positive” and “eliminating the negative” consequences of informed trading, the proposed approach would perform better than the legal status quo and the leading proposed regulatory alternatives at minimizing the sum of error and decision costs resulting from insider trading restrictions.
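For readers who want to see the mechanics, here is a minimal sketch of what a record in such a centralized, searchable disclosure database might look like; the field names, data structure, and query interface are purely illustrative assumptions of mine, not part of the chapter’s proposal.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Illustrative only: the fields below are my assumptions, not the
    # chapter's specification of the disclosure regime.
    @dataclass
    class TradeDisclosure:
        trader: str             # identity of the insider
        issuer: str             # company whose securities are being traded
        insider_status: str     # e.g., "officer", "director", "employee"
        trading_on_mnpi: bool   # trading on material, nonpublic information?
        trade_description: str  # nature of the trade: side, security, size
        filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class DisclosureDatabase:
        """A toy centralized, searchable store of pre-trade disclosures."""

        def __init__(self) -> None:
            self._records: list[TradeDisclosure] = []

        def file(self, record: TradeDisclosure) -> None:
            # Filing the disclosure is the precondition for authorized informed trading.
            self._records.append(record)

        def search_by_issuer(self, issuer: str) -> list[TradeDisclosure]:
            # Market participants could query by issuer to "decode" insider trades.
            return [r for r in self._records if r.issuer == issuer]

The point of the sketch is only that the three disclosed items (insider status, the fact of trading on material nonpublic information, and the nature of the trade) map naturally onto a simple, queryable record.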

Please download the paper and send me any thoughts.

I’ve posted a new project in progress (co-authored with Angela Diveley) to SSRN.  In “Do Expert Agencies Outperform Generalist Judges?”, we attempt to examine the relative performance of FTC Commissioners and generalist Article III federal court judges in antitrust cases and find some evidence undermining the oft-invoked assumption that Commission expertise leads to superior performance in adjudicatory decision-making.  Here is the abstract:

In the context of U.S. antitrust law, many commentators have recently called for an expansion of the Federal Trade Commission’s adjudicatory decision-making authority pursuant to Section 5 of the FTC Act, increased rulemaking, and carving out exceptions for the agency from increased burdens of production facing private plaintiffs. These claims are often expressly grounded in the assertion that expert agencies generate higher quality decisions than federal district court judges. We call this assertion the expertise hypothesis and attempt to test it. The relevant question is whether the expert inputs available to generalist federal district court judges translate to higher quality outputs and better performance than the Commission produces in its role as an adjudicatory decision-maker. While many appear to assume agencies have courts beat on this margin, to our knowledge, this oft-cited reason to increase the discretion of agencies and the deference afforded them by reviewing courts is void of empirical support. Contrary to the expertise hypothesis, we find evidence suggesting the Commission does not perform as well as generalist judges in its adjudicatory antitrust decision-making role. Furthermore, while the available evidence is more limited, there is no clear evidence the Commission adds significant incremental value to the ALJ decisions it reviews. In light of these findings, we conclude there is little empirical basis for the various proposals to expand agency authority and deference to agency decisions. More generally, our results highlight the need for research on the relationship between institutional design and agency expertise in the antitrust context.

We are in the process of expanding the analysis and, as always, comments are welcome here or at my email address on the sidebar.

In recent years, antitrust scholars have largely agreed on a couple of propositions involving tying and bundled discounting. With respect to tying (selling one’s monopoly “tying” product only on the condition that buyers also purchase another “tied” product), scholars from both the Chicago and Harvard Schools of antitrust analysis have generally concluded that there should be no antitrust liability unless the tie-in results in substantial foreclosure of marketing opportunities in the tied product market. Absent such foreclosure, scholars have reasoned, truly anticompetitive harm is unlikely to occur. The prevailing liability rule, however, condemns tie-ins without regard to whether they occasion substantial tied market foreclosure.

With respect to bundled discounting (selling a package of products for less than the aggregate price of the products if purchased separately), scholars have generally concluded that there should be no antitrust liability if the discount at issue could be matched by an equally efficient single-product rival of the discounter. That will be the case if each product in the bundle is priced above cost after the entire bundled discount is attributed to that product. Antitrust scholars have therefore generally endorsed a safe harbor for bundled discounts that are “above cost” under a “discount attribution test.”
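To illustrate the discount attribution test with purely hypothetical numbers (the prices and costs below are mine, not drawn from the literature): suppose a seller prices product A at $10 and product B at $5 standalone, but sells the A-B bundle for $13, a total discount of $2. Attributing the entire $2 discount to each product in turn yields effective prices of

\[
\$5 - \$2 = \$3 \quad (\text{product B}) \qquad \text{and} \qquad \$10 - \$2 = \$8 \quad (\text{product A}).
\]

If the seller’s incremental cost of each product is below its effective price (less than $3 for B and less than $8 for A), each product remains priced above cost after the full discount is attributed to it, so an equally efficient single-product rival could match the discount and the bundle would fall within the proposed safe harbor.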

In an article appearing in the December 2009 Harvard Law Review, Harvard law professor Einer Elhauge challenged each of these near-consensus propositions. According to Elhauge, the conclusion that significant tied market foreclosure should be a prerequisite to tying liability stems from scholars’ naïve acceptance of the Chicago School’s “single monopoly profit” theory. Elhauge insists that the theory is infirm and that instances of tying may occasion anticompetitive “power” (i.e., price discrimination) effects even if they do not involve substantial tied market foreclosure. He maintains that the Supreme Court has deemed such effects to be anticompetitive and that it was right to do so.

With respect to bundled discounting, Elhauge calls for courts to forgo price-cost comparisons in favor of a rule that asks whether the defendant seller has “coerced” consumers into buying the bundle by first raising its unbundled monopoly (“linking”) product price above the “but-for” level that would prevail absent the bundled discounting scheme and then offering a discount from that inflated level.

I have just posted to SSRN an article criticizing Elhauge’s conclusions on both tying and bundled discounting. On tying, the article argues, Elhauge makes both descriptive and normative mistakes. As a descriptive matter, Supreme Court precedent does not deem the so-called power effects (each of which was well known to Chicago School scholars) to be anticompetitive. As a normative matter, such effects should not be regulated because they tend to enhance total social welfare, especially when one accounts for dynamic efficiency effects. Because tying can create truly anticompetitive effects only when it involves substantial tied market foreclosure, such foreclosure should be a prerequisite to liability.

On bundled discounting, I argue, Elhauge’s proposed rule would be a disaster. The rule fails to account for the fact that bundled discounts may create immediate consumer benefit even if the seller has increased unbundled linking prices above but-for levels. It is utterly inadministrable and would chill procompetitive instances of bundled discounting. It is motivated by a desire to prevent “power” effects that are not anticompetitive under governing Supreme Court precedent (and should not be deemed so). Accordingly, courts should reject Elhauge’s proposed rule in favor of an approach that first focuses on the genuine prerequisite to discount-induced anticompetitive harm—“linked” market foreclosure—and then asks whether any such foreclosure is anticompetitive in that it could not be avoided by a determined competitive rival. To implement such a rule, courts would need to apply the discount attribution test.

The paper is a work-in-progress. Herbert Hovenkamp has already given me a number of helpful comments, which I plan to incorporate shortly. In the meantime, I’d love to hear what TOTM readers think.

Bruce Kobayashi and I have posted our chapter, Intellectual Property and Standard Setting, forthcoming in the ABA Antitrust Section Handbook on the Antitrust Aspects of Standard Setting.  It offers an analytical overview of the antitrust issues involving intellectual property and standard setting, including, but not limited to, patent holdup, royalty stacking, refusals to license, and patent pools.

Kobayashi and Wright (2010) offers a largely positive analysis of the antitrust issues in this area.  For readers interested in a more normative perspective making the case for an implied preemption of antitrust in the area of patent holdup in favor of state contract law and federal patent law remedies, see Kobayashi and Wright (2009).

Dan Crane has an excellent essay (“Chicago, Post-Chicago and Neo-Chicago”) reviewing Bob Pitofsky’s Overshot the Mark volume.  Here’s Dan’s brief abstract:

This essay reviews Bob Pitofsky’s 2008 essay compilation, How Chicago Overshot the Mark: The Effect of Conservative Economic Analysis on U.S. Antitrust. The essay critically evaluates the book’s rough handling of the Chicago School and suggests a path forward for a Neo-Chicago approach to antitrust analysis.

Readers of the blog will sense similar themes from Crane’s essay and my own review of the Pitofsky volume, Overshot the Mark? A Simple Explanation of the Chicago School’s Influence on Antitrust.

Dan and I both criticize the volume for overplaying its hand on two key points: (1) that the Chicago School’s dominance in antitrust has been a function more of right-wing political ideology than economics, and (2) that the Post-Chicagoans can claim superior predictive power over Chicago models with respect to the existing empirical evidence.  What interests me most about Dan’s excellent work is that we’ve come to very similar conclusions about a “path forward” for antitrust out of the Post-Chicago v. Chicago debates, which have become largely political and have lost economic meaning for most who use those terms.  Our conclusions each embrace an empirically motivated style of analysis that is sensitive to error costs.

Dan identifies the “Neo-Chicago School,” a term coined by David Evans and Jorge Padilla, as the optimal “third way.”  Basically, the Neo-Chicago school is the combination of price theory, empiricism and the error-cost framework to inform the design of antitrust liability rules.  What is new in the Neo-Chicago label is the error-cost framework.  As I’ve written elsewhere, while I consider myself a subscriber to the Neo-Chicago approach, I’m not too convinced there is anything “Neo” about it.  Here’s my mathematical proof of this proposition:

Neo-Chicago = Chicago + Error Cost Framework

Neo-Chicago = Chicago + Intellectual creation of Frank Easterbrook

Neo-Chicago = Chicago + Chicago

Neo-Chicago = 2*Chicago

It’s trivial to demonstrate then that Neo-Chicago is really just a double dose of the Chicago School.  QED.

I’m not sure what that means, but there is a more serious point underlying all of this that goes beyond semantics.  I think that Dan and Evans & Padilla both have it right that this theory + empiricism + error-cost framework is the most intellectually powerful way to generate a coherent approach to antitrust based on the best available theory and evidence.  In my recent work, including the Pitofsky book review linked above, I’ve been calling this approach “evidence based antitrust,” largely to avoid the whole Chicago v. Post-Chicago debate, which has become so loaded that folks often use it as an excuse to say unreasonable or simply incorrect things.

As I articulate the “evidence based antitrust” approach, it too is the combination of the best available theory + empirical evidence + the error-cost framework.   It may have the advantage of avoiding some of the political rhetoric that has become increasingly mainstream in these debates and of shifting our attention to the existing body of empirical evidence.  But these things are very hard to predict.  And of course, to the extent that a greater fraction of the antitrust debates nowadays are taking place in Congress, sensitivity to the existing empirics might be less likely.

Dan’s essay is highly recommended.  Go read it.  And look for some work from Crane and Wright in the future in a joint paper applying this sort of framework to bundled discounts.

I’ve posted to SSRN a new essay entitled Overshot the Mark?  A Simple Explanation of the Chicago School’s Influence on Antitrust.  It is a book review of Robert Pitofsky’s recent volume How the Chicago School Overshot the Mark: The Effect of Conservative Economic Analysis on U.S. Antitrust, and is forthcoming in Volume 5 of Competition Policy International.

The book review is a critical review of the Post-Chicago antitrust agenda adopted by many of the volume’s authors, and a defense of what the editors describe as conservative economics (but seem to mean the Chicago School), from an empirical, evidence-based perspective.  The idea of the review is to avoid the ideological component of the Chicago v. Post-Chicago debate by choosing to focus instead on the relative predictive power of the economic models.  In short, the evidence-based antitrust concept is one that requires running a horserace between the competing economic models of various forms of antitrust-relevant behavior (I focus on RPM and exclusive dealing in the review) in order to identify the best available economic learning upon which antitrust policy can and should be built.  The standard requires flexibility over time but also commits policy makers to take seriously the predictive power of models rather than grabbing whatever is convenient from the menu.

The review also makes a number of other critical points about the volume and the approach to antitrust it advocates, and responds to several of its specific essays.

Here is the abstract:

Using George Stigler’s rules of intellectual engagement as a guide, and applying an evidence-based approach, this essay is a critical review of former Federal Trade Commission Chairman Robert Pitofsky’s How the Chicago School Overshot the Mark: The Effect of Conservative Economic Analysis on U.S. Antitrust, a collection of essays devoted to challenging the Chicago School’s approach to antitrust in favor of a commitment to Post-Chicago policies. Overshot the Mark is an important book and one that will be cited as intellectual support for a new and “reinvigorated” antitrust enforcement regime based on Post-Chicago economics. Its claims about the Chicago School’s stranglehold on modern antitrust, despite the existence of a perceived superior economic model in the Post-Chicago literature, are provocative. The central task of this review is to evaluate the book’s underlying premise that Post-Chicago economics literature provides better explanatory power than the “status quo” embodied in existing theory and evidence supporting Chicago School theory. I will conclude that the premise is mistaken. The simplest explanation of the Chicago School’s continued influence of U.S. antitrust policy — that its models provide superior explanatory power and policy relevance — cannot be rejected and is consistent with the available evidence.

You can download it here.