Archives For scholarship

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The questions of how to regulate privacy and what role competition authorities should play in doing so are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to give greater consideration to privacy concerns during merger review, and have even been encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.



The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.
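
To see why the disentangling is hard in practice, consider a minimal numerical sketch (the figures and the imputed “data price” below are invented for illustration, not taken from our paper): when the nominal price is unchanged but one quality dimension degrades, the quality-adjusted price rises, and the two channels can mask one another.

```python
# Hypothetical illustration (numbers invented): quality-adjusted price is the
# nominal price divided by an index of product quality, so a quality decline
# at a constant nominal price is effectively a price increase.

def quality_adjusted_price(nominal_price: float, quality_index: float) -> float:
    """Return price per unit of quality (quality_index = 1.0 is the baseline)."""
    return nominal_price / quality_index

# Before: suppose the imputed price of a nominally "free" service (paid for in
# attention and data) is $10 per month at baseline quality.
before = quality_adjusted_price(nominal_price=10.0, quality_index=1.0)   # 10.0

# After: the nominal price is unchanged, but privacy protections (one quality
# dimension among several) degrade by 20%.
after = quality_adjusted_price(nominal_price=10.0, quality_index=0.8)    # 12.5

print(f"Quality-adjusted price rose by {100 * (after / before - 1):.0f}%")   # ~25%
```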

Second, product quality can invariably be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies both in its ability to tell time and in how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.


If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could also benefit those who receive lower prices under the scheme than they would have paid in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such metrics would charge lower prices to those able to pay more while charging higher prices to those who cannot afford them. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to say that the practice leads to a reduction in consumer welfare, even setting aside effects on total welfare. Again, the question is one of magnitudes, which privacy advocates have yet to consider in detail.
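
To make the magnitudes point concrete, here is a minimal back-of-the-envelope sketch (all of the demand figures and prices below are invented for illustration and are not taken from our paper): the same personalized-pricing scheme can raise or lower total consumer surplus depending on how large the newly served, lower-price group is relative to the group charged more than the old uniform price.

```python
# Hypothetical illustration (all numbers invented): whether "personalized
# pricing" raises or lowers total consumer surplus depends on the relative
# size of the group that gains (newly served at a low price) versus the
# group that loses (charged more than the old uniform price).

HIGH_WTP, LOW_WTP = 100.0, 40.0               # willingness to pay per consumer
UNIFORM_PRICE = 70.0                          # only high-WTP consumers buy at this price
PERSONALIZED = {"high": 90.0, "low": 30.0}    # high-WTP pays more, low-WTP is newly served

def consumer_surplus(n_high: int, n_low: int) -> tuple[float, float]:
    """Return (surplus under the uniform price, surplus under personalized prices)."""
    uniform = (HIGH_WTP - UNIFORM_PRICE) * n_high          # low-WTP group is priced out
    personalized = ((HIGH_WTP - PERSONALIZED["high"]) * n_high
                    + (LOW_WTP - PERSONALIZED["low"]) * n_low)
    return uniform, personalized

# Large favored group: consumer surplus rises even though some consumers pay more.
print(consumer_surplus(n_high=50, n_low=200))   # (1500.0, 2500.0)

# Small favored group: consumer surplus falls -- the question is one of magnitudes.
print(consumer_surplus(n_high=50, n_low=50))    # (1500.0, 1000.0)
```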


Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.


Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that adheres to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

As it begins its hundredth year, the FTC is increasingly becoming the Federal Technology Commission. The agency’s role in regulating data security, privacy, the Internet of Things, high-tech antitrust and patents, among other things, has once again brought to the forefront the question of the agency’s discretion and the sources of the limits on its power.

Please join us this Monday, December 16th, for a half-day conference launching the year-long “FTC: Technology & Reform Project,” which will assess both process and substance at the FTC and recommend concrete reforms to help ensure that the FTC continues to make consumers better off.

FTC Commissioner Josh Wright will give a keynote luncheon address titled, “The Need for Limits on Agency Discretion and the Case for Section 5 UMC Guidelines.” Project members will discuss the themes raised in our inaugural report and how they might inform some of the most pressing issues of FTC process and substance confronting the FTC, Congress and the courts. The afternoon will conclude with a Fireside Chat with former FTC Chairmen Tim Muris and Bill Kovacic, followed by a cocktail reception.

Full Agenda:

  • Lunch and Keynote Address (12:00-1:00)
    • FTC Commissioner Joshua Wright
  • Introduction to the Project and the “Questions & Frameworks” Report (1:00-1:15)
    • Gus Hurwitz, Geoffrey Manne and Berin Szoka
  • Panel 1: Limits on FTC Discretion: Institutional Structure & Economics (1:15-2:30)
    • Jeffrey Eisenach (AEI | Former Economist, BE)
    • Todd Zywicki (GMU Law | Former Director, OPP)
    • Tad Lipsky (Latham & Watkins)
    • Geoffrey Manne (ICLE) (moderator)
  • Panel 2: Section 5 and the Future of the FTC (2:45-4:00)
    • Paul Rubin (Emory University Law and Economics | Former Director of Advertising Economics, BE)
    • James Cooper (GMU Law | Former Acting Director, OPP)
    • Gus Hurwitz (University of Nebraska Law)
    • Berin Szoka (TechFreedom) (moderator)
  • A Fireside Chat with Former FTC Chairmen (4:15-5:30)
    • Tim Muris (Former FTC Chairman | George Mason University) & Bill Kovacic (Former FTC Chairman | George Washington University)
  • Reception (5:30-6:30)
Our conference is a “widely-attended event.” Registration is $75 but free for nonprofit, media and government attendees. Space is limited, so RSVP today!

Working Group Members:
Howard Beales
Terry Calvani
James Cooper
Jeffrey Eisenach
Gus Hurwitz
Thom Lambert
Tad Lipsky
Geoffrey Manne
Timothy Muris
Paul Rubin
Joanna Shepherd-Bailey
Joe Sims
Berin Szoka
Sasha Volokh
Todd Zywicki

William Buckley once described a conservative as “someone who stands athwart history, yelling Stop.” Ironically, this definition applies to Professor Tim Wu’s stance against the Supreme Court applying the Constitution’s protections to the information age.

Wu admits he is going against the grain by fighting what he describes as leading liberals from the civil rights era, conservatives and economic libertarians bent on deregulation, and corporations practicing “First Amendment opportunism.” Wu wants to reorient our thinking on the First Amendment, limiting its domain to what he believes are its rightful boundaries.

But in his relatively recent piece in The New Republic and journal article in the U Penn Law Review, Wu bites off more than he can chew. First, Wu does not recognize that the First Amendment is used “opportunistically” only because the New Deal revolution and subsequent jurisprudence have foreclosed all other Constitutional avenues to challenge economic regulations. Second, his positive formulation for differentiating protected speech from non-speech will lead to results counter to his stated preferences. Third, contra both conservatives like Bork and liberals like Wu, the Constitution’s protections can and should be adapted to new technologies, consistent with the original meaning.

Wu’s Irrational Lochner-Baiting

Wu makes the case that the First Amendment has been interpreted to protect things that aren’t really within the First Amendment’s purview. He starts his New Republic essay with Sorrell v. IMS (cf. TechFreedom’s Amicus Brief), describing the data mining process as something undeserving of any judicial protection. He deems the application of the First Amendment to economic regulation a revival of Lochner, evincing a misunderstanding of the case that appeals to undefended academic prejudice and popular ignorance. This is important because the economic liberty which was long protected by the Constitution, either as a matter of federalism or substantive rights, no longer has any protection from government power aside from the First Amendment jurisprudence Wu decries.

Lochner v. New York is a 1905 Supreme Court case that has received more scorn, left and right, than just about any case that isn’t dealing with slavery or segregation. This has led to the phenomenon my former Constitutional Law professor, David Bernstein, calls “Lochner-baiting,” where a commentator describes any Supreme Court decision with which he or she disagrees as Lochnerism. Wu does this throughout his New Republic piece, somehow seeing parallels between application of the First Amendment to the Internet and a Liberty of Contract case under substantive Due Process.

The idea that economic regulation should receive little judicial scrutiny is not new. In fact, it has been the operating law since at least the famous Carolene Products footnote four. However, the idea that only discrete and insular minorities should receive First Amendment protection is a novel application of law. Wu implicitly argues exactly this when he says “corporations are not the Jehovah’s Witnesses, unpopular outsiders needing a safeguard that legislators and law enforcement could not be moved to provide.” On the contrary, the application of First Amendment protections to Jehovah’s Witnesses and student protesters is part and parcel of the application of the First Amendment to advertising and data that drives the Internet. Just because Wu does not believe businesspersons need the Constitution’s protections does not mean they do not apply.

Finally, while Wu may be correct that the First Amendment should not apply to everything for which it is being asserted today, he does not seem to recognize why there is “First Amendment opportunism.” In theory, those trying to limit the power of government over economic regulation could use any number of provisions in the text of the Constitution: enumerated powers of Congress and the Tenth Amendment, the Ninth Amendment, the Contracts Clause, the Privileges or Immunities Clause of the Fourteenth Amendment, the Due Process Clause of the Fifth and Fourteenth Amendments, the Equal Protection Clause, etc. For much of the Constitution’s history, the combination of these clauses generally restricted the growth of government over economic affairs. Lochner was just one example of courts generally putting the burden on governments to show the restrictions placed upon economic liberty are outweighed by public interest considerations.

The Lochner court actually protected a small bakery run by immigrants from special interest legislation aimed at putting them out of business on behalf of bigger, established competitors. Shifting this burden away from government and towards the individual is not clearly the good thing Wu assumes. Applying the same Liberty of Contract doctrine, the Supreme Court struck down legislation enforcing housing segregation in Buchanan v. Warley and legislation outlawing the teaching of the German language in Meyer v. Nebraska. After the New Deal revolution, courts chose to apply only rational basis review to economic regulation, and would need to find a new way to protect fundamental rights that were once classified as economic in nature. The burden shifted to individuals to prove an economic regulation is not rationally related to any conceivable legitimate governmental purpose.

Now, the only Constitutional avenue left for a winnable challenge of economic regulation is the First Amendment. Under the rational basis test, the Tenth Circuit in Powers v. Harris actually found that protecting businesses from competition is a legitimate state interest. This is why the cat owner Wu references in his essay and describes in more detail in his law review article brought a First Amendment claim against a regime requiring licensing of his talking cat show: there is basically no other Constitutional protection against burdensome economic regulation.

The More You Edit, the More Your [sic] Protected?

In his law review piece, Machine Speech, Wu explains that the First Amendment has a functionality requirement. He points out that the First Amendment has never been interpreted to mean, and should not mean, that all communication is protected. Wu believes the dividing lines between protected and unprotected speech should be whether the communicator is a person attempting to communicate a specific message in a non-mechanical way to another, and whether the communication at issue is more speech than conduct. The first test excludes carriers and conduits that handle or process information but have an ultimately functional relationship with it, like Federal Express or a telephone company. The second excludes tools, those works that are purely functional, like navigational charts, court filings, or contracts.

Of course, Wu admits the actual application of his test online can be difficult. In his law review article he deals with some easy cases, like the obvious application of the First Amendment to blog posts, tweets, and video games, and non-application to Google Maps. The harder cases, though, are the main target of his article: search engines, automated concierges, and other algorithm-based services. At the very end of his law review article, Wu finally states how to differentiate between protected speech and non-speech in such cases:

The rule of thumb is this: the more the concierge merely tells the user about himself, the more like a tool and less like protected speech the program is. The more the programmer puts in place his opinion, and tries to influence the user, the more likely there will be First Amendment coverage. These are the kinds of considerations that ultimately should drive every algorithmic output case that courts could encounter.

Unfortunately for Wu, this test would lead to results counterproductive to his goals.

Applying this rationale to Google, for instance, would lead to the perverse conclusion that the more the allegations against the company about tinkering with its algorithm to disadvantage competitors are true, the more likely Google would receive First Amendment protection. And if Net Neutrality advocates are right that ISPs are restricting consumer access to content, then the analogy to the newspaper in Tornillo becomes a good one: ISPs have a right to exercise editorial discretion, and mandating speech would be unconstitutional. The application of Wu’s test to search engines and ISPs effectively puts them in a “use it or lose it” position with respect to their First Amendment rights, a position courts have rejected. The idea that antitrust and FCC regulations can apply without First Amendment scrutiny only if search engines and ISPs are not doing anything requiring antitrust or FCC scrutiny is counterproductive to sound public policy, and presumably to the regulatory goals Wu holds.

First Amendment Dynamism

The application of the First Amendment to the Internet Age does not involve large leaps of logic from current jurisprudence. As Stuart Minor Benjamin shows in his article in the same issue of the U Penn Law Review, the bigger leap would be to follow Wu’s recommendations. We do not need a 21st Century First Amendment that some on the left have called for—the original one will do just fine.

This is because the Constitution’s protections can be dynamically applied, consistent with original meaning. Wu’s complaint is that he does not like how the First Amendment has evolved. Even his points that have merit, though, seem to indicate a stasis mentality. In her book, The Future and Its Enemies, Virginia Postrel described this mentality as a preference for a “controlled, uniform society that changes only with permission from some central authority.” But the First Amendment’s text is not a grant of power to the central authority to control or permit anything. It actually restricts government from intervening into the open-ended society where creativity and enterprise, operating under predictable rules, generate progress in unpredictable ways.

The application of current First Amendment jurisprudence to search engines, ISPs, and data mining will not necessarily create a world where machines have rights. Wu is right that the line must be drawn somewhere, but his technocratic attempt to empower government officials to control innovation is short-sighted. Ultimately, the First Amendment is as much about protecting the individuals who innovate and create online as those in the offline world. Such protection embraces the future instead of fearing it.

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade off” between the benefits of incentivizing new works and the social deadweight losses imposed by the access restrictions imposed by these (temporary) “monopolies.” I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles distributed each year in a wide-ranging variety of academic disciplines and fields of research. The articles now number in the millions themselves; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-invent conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health and in recently proposed legislation (FASTR Act) and in other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), as he blithely dismissed commercial incentives as being irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which has become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary in justifying how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1985 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten that the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms who own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences: he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers.  Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details for the first time ever in a publication and at great length the necessary transaction costs incurred by any successful publishing enterprise in the Internet age.  To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content were over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers.  Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, of which it ultimately did not sell many at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers.  It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot-com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood.  Individual publishers now receive hundreds of thousands—the large scholarly publisher, Reed Elsevier, receives more than one million—manuscripts per year. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies that have reported that the peer review system costs approximately $2.9 billion annually in operation costs (translating into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, cyber security of the websites, and related digital issues.
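
For a rough sense of what those figures imply per manuscript, here is a back-of-the-envelope calculation using only the numbers cited above; the derived per-submission and per-article costs are my own illustrative arithmetic, not figures reported by the publishers.

```python
# Back-of-the-envelope arithmetic using only the figures cited above; the
# derived per-manuscript and per-article numbers are illustrative, not data
# reported by the publishers.

submissions_per_year = 1_000_000     # manuscripts received annually (Reed Elsevier, cited above)
peer_review_budget = 100_000_000     # annual peer review budget in dollars (over $100M, cited above)
rejection_rate = 0.65                # share of submissions rejected (cited above)

accepted = submissions_per_year * (1 - rejection_rate)
cost_per_submission = peer_review_budget / submissions_per_year
cost_per_published_article = peer_review_budget / accepted

print(f"Accepted articles per year: {accepted:,.0f}")                                # 350,000
print(f"Peer review cost per submission: ${cost_per_submission:,.0f}")               # $100
print(f"Peer review cost per accepted article: ${cost_per_published_article:,.0f}")  # $286

# Industry-wide: the study's GBP 1.9 billion corresponds to roughly USD 2.9 billion,
# an implied exchange rate of about 1.5 dollars per pound.
print(f"Implied exchange rate: {2.9e9 / 1.9e9:.2f} USD per GBP")                     # ~1.53
```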

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that is universally recognized for authors by the incentive-to-create conventional wisdom: no one will make the necessary investments to create or distribute a work if the fruits of their labors are not secured to them. This basic economic fact—that dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.

NOTE: Some small copy-edits were made to this blog posting.


William & Mary’s Alan Meese has posted a terrific tribute to Robert Bork, who passed away this week.  Most of the major obituaries, Alan observes, have largely ignored the key role
Bork played in rationalizing antitrust, a body of law that veered sharply off course in the middle of the last century.  Indeed, Bork began his 1978 book, The Antitrust Paradox, by comparing the then-prevailing antitrust regime to the sheriff of a frontier town:  “He did not sift the evidence, distinguish between suspects, and solve crimes, but merely walked the main street and every so often pistol-whipped a few people.”  Bork went on to explain how antitrust, if focused on consumer welfare (which equated with allocative efficiency), could be reconceived in a coherent fashion.

It is difficult to overstate the significance of Bork’s book and his earlier writings on which it was based.  Chastened by Bork’s observations, the Supreme Court began correcting its antitrust mistakes in the mid-1970s.  The trend began with the 1977 Sylvania decision, which overruled a precedent making it per se illegal for manufacturers to restrict the territories in which their dealers could operate.  (Manufacturers seeking to enhance sales of their brand may wish to give dealers exclusive sales territories to protect them against “free-riding” on their demand-enhancing customer services; pre-Sylvania precedent made it hard for manufacturers to do this.)  Sylvania was followed by:

  • Professional Engineers (1978), which helpfully clarified that antitrust’s theretofore unwieldy “Rule of Reason” must be focused exclusively on competition;
  • Broadcast Music, Inc. (1979), which held that competitors’ price-fixing arrangements that reduce costs and enhance output may be legal;
  • NCAA (1984), which recognized that trade restraints among competitors may be necessary to create new products and services and thereby made it easier for competitors to enter into output-enhancing joint ventures;
  • Khan (1997), which abolished the ludicrous per se rule against maximum resale price maintenance;
  • Trinko (2004), which recognized that some monopoly pricing may aid consumers in the long run (by enhancing the incentive to innovate) and narrowly circumscribed the situations in which a firm has a duty to assist its rivals; and
  • Leegin (2007), which overruled a 96-year-old precedent declaring minimum resale price maintenance–a practice with numerous potential procompetitive benefits–to be per se illegal.

Bork’s fingerprints are all over these decisions.  Alan’s terrific post discusses several of them and provides further detail on Bork’s influence.

And while you’re checking out Alan’s Bork tribute, take a look at his recent post discussing my musings on the AALS hiring cartel.  Alan observes that AALS’s collusive tendencies reach beyond the lateral hiring context.  Who’d have guessed?

Available here.  Although not the first article to build on Orin Kerr’s brilliant paper, A Theory of Law (blog post here) (that honor belongs to Josh Blackman’s challenging and thought-provoking paper, My Own Theory of the Law) (blog post here), I think this is an important contribution to this burgeoning field.  It’s still a working paper, though, so comments are welcome.

In a response to my essay, The Trespass Fallacy in Patent Law, in which I explain why patent scholars like Michael Meurer, James Bessen, T.J. Chiang and others are committing the nirvana fallacy in their critiques of the patent system, my colleague T.J. Chiang writes at PrawfsBlawg:

The Nirvana fallacy, at least as I understand it, is to compare an imperfect existing arrangement (such as the existing patent system) to a hypothetical idealized system. But the people comparing the patent system to real property—and I count myself among them—are not comparing it to an idealized fictional system, whether conceptualized as land boundaries or as estate boundaries. We are saying that, based on our everyday experiences, the real property system seems to work reasonably well because we don’t feel too uncertain about our real property rights and don’t get into too many disputes with our neighbors. This is admittedly a loose intuition, but it is not an idealization in the sense of using a fictional baseline. It is the same as saying that the patent system seems to work reasonably well because we see a lot of new technology in our everyday experience.

I would like to make two quick points in response to T.J.’s attempt at wiggling out from serving as one of the examples I identify in my essay as a patent scholar who uses trespass doctrine in a way that reflects the nirvana fallacy.

First, what T.J. describes himself as doing — comparing an actual institutional system to a “loose intuition” about another institutional system — is exactly what Harold Demsetz identified as the nirvana fallacy (when he coined the term in 1969).  When economists or legal scholars commit the nirvana fallacy, they always justify their idealized counterfactual standard by appeal to some intuition or gestalt sense of the world; in fact, Demsetz’s example of the nirvana fallacy is when economists have a loose intuition that regulation always works perfectly to fix market failures.  These economists do this for the simple reason that they’re social scientists, and so they have to make their critiques seem practical.

It’s like the infamous statement by Pauline Kael in 1972 (quoting from memory): “I can’t believe Nixon won, because I don’t know anyone who voted for him.” Similarly, what patent scholars like T.J. are doing is saying: “I can’t believe that trespass isn’t clear and efficient, because I don’t know anyone who has been involved in a trespass lawsuit or I don’t hear of any serious trespass lawsuits.”  Economists or legal scholars always have some anecdotal evidence — either personal experiences or merely an impressionistic intuition about other people — to offer as support for their counterfactual by which they’re evaluating (and criticizing) the actual facts of the world. The question is whether such an idealized counterfactual is a valid empirical metric or not; of course, it is not.  To do this is exactly what Demsetz criticized as the nirvana fallacy.

Ultimately, no social scientist or legal scholar ever commits the “nirvana fallacy” as T.J. has defined it in his blog posting, and this leads to my second point.  The best way to test T.J.’s definition is to ask: Does anyone know a single lawyer, legal scholar or economist who has committed the “nirvana fallacy” as defined by T.J.?  What economist or lawyer appeals to a completely imaginary “fictional baseline” as the standard for evaluating a real-world institution?

The answer to this question is obvious.  In fact, when I posited this exact question to T.J. in an exchange we had before he made his blog posting, he could not answer it.  The reason why he couldn’t answer it is that no one says in legal scholarship or in economic scholarship: “I have a completely made-up, imaginary ‘fictionalized’ world to which I’m going to compare a real-world institution or legal doctrine.”  This certainly is not the meaning of the nirvana fallacy, and I’m fairly sure Demsetz would be surprised to learn that he identified a fallacy that, according to T.J., has never been committed by a single economist or legal scholar. Ever.

In sum, what T.J. describes in his blog posting — using a “loose intuition” of an institution as an empirical standard for critiquing the operation of another institution — is the nirvana fallacy. Philosophers may posit completely imaginary and fictionalized baselines — it’s what they call “other worlds” — but that is not what social scientists and legal scholars do.  Demsetz was not talking about philosophers when he identified the nirvana fallacy.  Rather, he was talking about exactly what T.J. admits he does in his blog posting (and which he has done in his scholarship).

Thank you to Josh for inviting me to guest blog on Truth on the Market.  As my first blog posting, I thought TOTM readers would enjoy reading about my latest paper that I posted to SSRN, which has been getting some attention in the blogosphere (see here and here).  It’s a short, 17-page essay — see, it is possible that law professors can write short articles — called, The Trespass Fallacy in Patent Law.

This essay responds to the widely-heard cries today that the patent system is broken, as expressed in the popular press and by tech commentators, legal academics, lawyers, judges, congresspersons and just about everyone else.  The $1 billion verdict issued this past Friday against Samsung in Apple’s patent infringement lawsuit hasn’t changed anything. (If anything, Judge Richard Posner finds the whole “smart phone war” to be Exhibit One in the indisputable case that the patent system is broken.)

Although there are many reasons why people think the patent system is systemically broken, one common refrain is that patents fail as property rights because patent infringement doctrine is not as clear, determinate and efficient as trespass doctrine is for real estate. Thus, the explicit standard that is invoked to justify why we must fix patent boundaries — or the patent system more generally — is that the patent system does not work as clearly and efficiently as fences and trespass doctrine do in real property. As Michael Meurer and James Bessen explicitly state in their book, Patent Failure: “An ideal patent system features rights that are defined as clearly as the fence around a piece of land.”

My essay explains that this is a fallacious argument, suffering both empirical and logical failings. Empirically, there are no formal studies of how trespass functions in litigation; thus, complaints about the patent system’s indeterminacy are based solely on an idealized theory of how trespass should function.  Oftentimes, patent scholars, like my colleague, T.J. Chiang, simply assert without any supporting evidence whatsoever that fences are “crystal clear” and thus there are “stable boundaries” for real estate; T.J. thus concludes that the patent system is working inefficiently and needs to be reformed (as captured in the very title of his article, Fixing Patent Boundaries). The variability in patent claim construction, asserts T.J., is tantamount to “the fence on your land . . . constantly moving in random directions. . . . Because patent claims are easily changed, they serve as poor boundaries, undermining the patent system for everyone.”

Other times, this idealized theory about trespass is given some credence by appeals to loose impressions or a gestalt of how trespass works, or there are appeals to anecdotes and personal stories about how well trespass functions in the real world. Bessen and Meurer do this in their book, Patent Failure, where they back up their claim that trespass is clear with a search they apparently did on Westlaw of innocent trespass cases in California in a 3-year period. Either way, assertions backed by intuitions or a few anecdotal cases cannot serve as an empirical standard by which one makes a systemic evaluation that we should shift to another institutional arrangement because the current one is operating inefficiently. In short, the trespass standard represents the nirvana fallacy.

Even more important, anecdotal evidence and related studies suggest that trespass and other boundary disputes between landowners are neither as clear nor as determinate as patent scholars assume them to be (something I briefly summarize in my essay, where I call for more empirical studies to be done).

Logically, the comparison of patent boundaries to trespass commits what philosophers would call a category mistake. It conflates the boundaries of an entire legal right (a patent), not with the boundaries of its conceptual counterpart (real estate), but rather with a single doctrine (trespass) that secures real estate only in a single dimension (geographic boundaries). As all 1Ls learn in their Property courses, real estate is not land. Accordingly, estate boundaries are defined along the dimensions of time, use and space, as represented in myriad doctrines like easements, nuisance, restrictive covenants, and future interests, among others. In fact, the overlapping possessory and use rights shared by owners of joint tenancies or by owners of possessory estates with overlapping future interests share many conceptual and doctrinal similarities to the overlapping rights that patent-owners may have over a single product in the marketplace (like a smart phone).  In short, the proper conceptual analog for patent boundaries is estate boundaries, not fences.

In sum, the trespass fallacy is driving an indeterminacy critique in patent law that is both empirically unverified and conceptually misleading. Check out my essay for much more evidence and a more in-depth explanation of why this is the case.

My former student and recent George Mason Law graduate (and co-author, here) Angela Diveley has posted Clarifying State Action Immunity Under the Antitrust Laws: FTC v. Phoebe Putney Health System, Inc.  It is a look at the state action doctrine and the Supreme Court’s next chance to grapple with it in Phoebe Putney.  Here is the abstract:

The tension between federalism and national competition policy has come to a head. The state action doctrine finds its basis in principles of federalism, permitting states to replace free competition with alternative regulatory regimes they believe better serve the public interest. Public restraints have a unique ability to undermine the regime of free competition that provides the basis of U.S.- and state-commerce policies. Nevertheless, preservation of federalism remains an important rationale for protecting such restraints. The doctrine has elusive contours, however, which have given rise to circuit splits and overbroad application that threatens to subvert the state action doctrine’s dual goals of federalism and competition. The recent Eleventh Circuit decision in FTC v. Phoebe Putney Health System, Inc. epitomizes the concerns associated with misapplication of state action immunity. The U.S. Supreme Court recently granted the FTC’s petition for certiorari and now has the opportunity to more clearly define the contours of the doctrine. In Phoebe Putney, the FTC has challenged a merger it claims is the product of a sham transaction, an allegation certain to test the boundaries of the state action doctrine and implicate the interpretation of a two-pronged test designed to determine whether consumer welfare-reducing conduct taken pursuant to purported state authorization is immune from antitrust challenge. The FTC’s petition for writ of certiorari raises two issues for review. First, it presents the question concerning the appropriate interpretation of foreseeability of anticompetitive conduct. Second, the FTC presents the question whether a passive supervisory role on the state’s part can be construed as state action or whether its approval of the merger was a sham. In this paper, I seek to explicate the areas in which the state action doctrine needs clarification and to predict how the Court will decide the case in light of precedent and the principles underlying the doctrine.

Go read the whole thing.