Archives For intellectual property

On August 14, the Federalist Society’s Regulatory Transparency Project released a report detailing the harm imposed on innovation and property rights by the Patent Trial and Appeal Board (PTAB), an administrative patent review tribunal within the Patent and Trademark Office created by the infelicitously named “America Invents Act” of 2011.  As the report’s abstract explains:

Patents are property rights secured to inventors of new products or services, such as the software and other high-tech innovations in our laptops and smart phones, the life-saving medicines prescribed by our doctors, and the new mechanical designs that make batteries more efficient and airplane engines more powerful. Many Americans first learn in school about the great inventors who revolutionized our lives with their patented innovations, such as Thomas Edison (the light bulb and record player), Alexander Graham Bell (the telephone), Nikola Tesla (electrical systems), the Wright brothers (airplanes), Charles Goodyear (cured rubber), Enrico Fermi (nuclear power), and Samuel Morse (the telegraph). These inventors and tens of thousands of others had the fruits of their inventive labors secured to them by patents, and these vital property rights have driven America’s innovation economy for over 225 years. For this reason, the United States has long been viewed as having the “gold standard” patent system throughout the world.

In 2011, Congress passed a new law, called the America Invents Act (AIA), that made significant changes to the U.S. patent system. Among its many changes, the AIA created a new administrative tribunal for invalidating “bad patents” (patents mistakenly issued because the claimed inventions were not actually new or because they suffer from other defects that create problems for companies in the innovation economy). This administrative tribunal is called the Patent Trial & Appeal Board (PTAB). The PTAB is composed of “administrative patent judges” appointed by the Director of the United States Patent & Trademark Office (USPTO). The PTAB administrative judges are supposed to be experts in both technology and patent law. They hold administrative hearings in response to petitions that challenge patents as defective. If they agree with the challenger, they cancel the patent by declaring it “invalid.” Anyone in the world willing to pay a filing fee can file a petition to invalidate any patent.

As many people are aware, administrative agencies can become a source of costs and harms that far outweigh the harms they were created to address. This is exactly what has happened with the PTAB. This administrative tribunal has become a prime example of regulatory overreach.

Congress created the PTAB in 2011 in response to concerns about the quality of patents being granted to inventors by the USPTO. Legitimate patents promote both inventive activity and the commercial development of inventions into real-world innovation used by regular people the world over. But “bad patents” clog the intricate gears of the innovation economy, deterring real innovators and creating unnecessary costs for companies by enabling needless and wasteful litigation. The creation of the PTAB was well intended: it was supposed to remove bad patents from the innovation economy. But the PTAB has ended up imposing tremendous and unnecessary costs and creating destructive uncertainty for the innovation economy.

In its procedures and its decisions, the PTAB has become an example of an administrative tribunal run amok. It does not provide basic legal procedures to patent owners that all other property owners receive in court. When called upon to redress these concerns, the courts have instead granted the PTAB the same broad deference they have given to other administrative agencies. Thus, these problems have gone uncorrected and unchecked. Without providing basic procedural protections to all patent owners, the PTAB has gone too far with its charge of eliminating bad patents. It is now invalidating patents in a willy-nilly fashion. One example among many is that, in early 2017, the PTAB invalidated a patent on a new MRI machine because it believed this new medical device was an “abstract idea” (and thus unpatentable).

The problems in the PTAB’s operations have become so serious that a former federal appellate chief judge has referred to PTAB administrative judges as “patent death squads.” This metaphor has proven apt, even if rhetorically exaggerated. Created to remove only bad patents clogging the innovation economy, the PTAB has itself begun to clog innovation — killing large numbers of patents and casting a pall of uncertainty over every patent that might become valuable and thus a target of a PTAB petition to invalidate it.

The U.S. innovation economy has thrived because inventors know they can devote years of productive labor and resources into developing their inventions for the marketplace, secure in the knowledge that their patents provide a solid foundation for commercialization. Pharmaceutical companies depend on their patents to recoup billions of dollars in research and development of new drugs. Venture capitalists invest in startups on the basis of these vital property rights in new products and services, as viewers of Shark Tank see every week.

The PTAB now looms over all of these inventive and commercial activities, threatening to cancel a valuable patent at any moment and without rhyme or reason. In addition to the lost investments in the invalidated patents themselves, this creates uncertainty for inventors and investors, undermining the foundations of the U.S. innovation economy.

This paper explains how the PTAB has become a prime example of regulatory overreach. The PTAB administrative tribunal is creating unnecessary costs for inventors and companies, and thus it is harming the innovation economy far beyond the harm of the bad patents it was created to remedy. First, we describe the U.S. patent system and how it secures property rights in technological innovation. Second, we describe Congress’s creation of the PTAB in 2011 and the six different administrative proceedings the PTAB uses for reviewing and canceling patents. Third, we detail the various ways that the PTAB is now causing real harm, through both its procedures and its substantive decisions, and thus threatening innovation.

The PTAB has created fundamental uncertainty about the status of all patent rights in inventions. The result is that the PTAB undermines the market value of patents and frustrates the role that these property rights serve in the investment in and commercial development of the new technological products and services that make many aspects of our modern lives seem like miracles.

In June 2017, the U.S. Supreme Court agreed to review the Oil States Energy case, raising the question of whether PTAB patent review “violates the Constitution by extinguishing private property rights through a non-Article III forum without a jury.”  A Supreme Court finding of unconstitutionality would be ideal.  But in the event the Court leaves PTAB patent review intact, legislation to curb the worst excesses of the PTAB – such as the bipartisan “STRONGER Patents Act of 2017” – merits serious consideration.  Stay tuned – I will have more to say in detail about potential patent law reforms, including the reining in of the PTAB, in the near future.

I recently published a piece in the Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to de-index websites offering the infringing goods in question, regardless of the location of the sites (and even though Google itself was neither a party to the case nor in any way held liable for the infringement). As a result, the Court’s ruling would affect Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague, Geoffrey Manne (Founder and Executive Director of ICLE), reminds me, the principle of comity largely originated in the work of the 17th-century Dutch legal scholar Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient, countervailing comity or freedom of expression concerns in this case that would counsel against such an order being granted, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking action.   

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that the decision:

[W]ould cut off access to information for U.S. users and would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices balancing both competing values and competing claims from different jurisdictions. Just as it does in the offline world, navigating this path will require flexibility and skepticism (if not rejection) of absolutism — including with respect to the application of fundamental values. Even things like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find that fine line between allowing individual countries to enforce their own national judgments and tolerating countries that have made different choices. This will not be easy, as is well illustrated by something Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference toward the laws and values of other countries; in essence, EFF’s position would always prioritize the particular speech values adopted in the US, regardless of whether the countries affected in a dispute had adopted them. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF — not another country, but EFF — doesn’t find the enforcement of intellectual property rights to be compelling. But that unprincipled approach would naturally lead in a different direction where the court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided by both a determination to meaningfully address harms, and a sober reservation about interfering in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has the unfettered right to censor the Internet. That’s not what it held — and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

Too much ink has been spilled in an attempt to gin up antitrust controversies regarding efforts by holders of “standard essential patents” (SEPs, patents covering technologies that are adopted as part of technical standards relied upon by manufacturers) to obtain reasonable returns to their property. Antitrust theories typically revolve around claims that SEP owners engage in monopolistic “hold-up” when they threaten injunctions or seek “excessive” royalties (or other “improperly onerous” terms) from potential licensees in patent licensing negotiations, in violation of pledges (sometimes imposed by standard-setting organizations) to license on “fair, reasonable, and non-discriminatory” (FRAND) terms. As Professors Joshua Wright and Douglas Ginsburg, among others, have explained, contract law, tort law, and patent law are far better placed to handle “FRAND-related” SEP disputes than antitrust law. Adding antitrust to the litigation mix generates unnecessary costs and inefficiently devalues legitimate private property rights.

Concerns by antitrust mavens that other areas of law are insufficient to cope adequately with SEP-FRAND disputes are misplaced. A fascinating draft law review article by Koren Wong-Ervin, Director of the Scalia Law School’s Global Antitrust Institute, and Anne Layne-Farrar, Vice President of Charles River Associates, does an admirable job of summarizing key decisions by U.S. and foreign courts involved in determining FRAND rates in SEP litigation, and of highlighting key economic concepts underlying these holdings. As explained in the article’s abstract:

In the last several years, courts around the world, including in China, the European Union, India, and the United States, have ruled on appropriate methodologies for calculating either a reasonable royalty rate or reasonable royalty damages on standard-essential patents (SEPs) upon which a patent holder has made an assurance to license on fair, reasonable and nondiscriminatory (FRAND) terms. Included in these decisions are determinations about patent holdup, licensee holdout, the seeking of injunctive relief, royalty stacking, the incremental value rule, reliance on comparable licenses, the appropriate revenue base for royalty calculations, and the use of worldwide portfolio licensing. This article provides an economic and comparative analysis of the case law to date, including the landmark 2013 FRAND-royalty determination issued by the Shenzhen Intermediate People’s Court (and affirmed by the Guangdong Province High People’s Court) in Huawei v. InterDigital; numerous U.S. district court decisions; recent seminal decisions from the United States Court of Appeals for the Federal Circuit in Ericsson v. D-Link and CSIRO v. Cisco; the six recent decisions involving Ericsson issued by the Delhi High Court; the European Court of Justice decision in Huawei v. ZTE; and numerous post-Huawei v. ZTE decisions by European Union member states. While this article focuses on court decisions, discussions of the various agency decisions from around the world are also included throughout.

To whet the reader’s appetite, key economic policy and factual “takeaways” from the article, which are reflected implicitly in a variety of U.S. and foreign judicial holdings, are as follows:

  • Holdup of any form requires lock-in, i.e., standard-implementing companies with asset-specific investments locked in to the technologies defining the standard or SEP holders locked in to licensing in the context of a standard because of standard-specific research and development (R&D) leading to standard-specific patented technologies.
  • Lock-in is a necessary condition for holdup, but it is not sufficient. For holdup in any guise to actually occur, there also must be an exploitative action taken by the relevant party once lock-in has happened. As a result, the mere fact that a license agreement was signed after a patent was included in a standard is not enough to establish that the patent holder is practicing holdup—there must also be evidence that the SEP holder took advantage of the licensee’s lock-in, for example by charging supra-FRAND royalties that it could not otherwise have charged but for the lock-in.
  • Despite coming after a particular standard is published, the vast majority of SEP licenses are concluded in arm’s length, bilateral negotiations with no allegations of holdup or opportunistic behavior. This follows because market mechanisms impose a number of constraints that militate against acting on the opportunity for holdup.
  • In order to support holdup claims, an expert must establish that the terms and conditions in an SEP licensing agreement generate payments that exceed the value conveyed by the patented technology to the licensor that signed the agreement.
  • The threat of seeking injunctive relief, on its own, cannot lead to holdup unless that threat is both credible and actionable. Indeed, the in terrorem effect of filing for an injunction depends on the likelihood of its being granted. Empirical evidence shows a significant decline in the number of injunctions sought as well as in the actual rate of injunctions granted in the United States following the Supreme Court’s 2006 decision in eBay v. MercExchange LLC, which ended the prior nearly automatic granting of injunctions to patentees and instead required courts to apply a traditional four-part equitable test for granting injunctive relief.
  • The Federal Circuit has recognized that an SEP holder’s ability to seek injunctive relief is an important safeguard to help prevent potential licensee holdout, whereby an SEP infringer unilaterally refuses a FRAND royalty or unreasonably delays negotiations to the same effect.
  • Related to the previous point, seeking an injunction against a licensee who is delaying or not negotiating in good faith need not actually result in an injunction. The fact that a court finds a licensee is holding out and/or not engaging in good faith licensing discussions can be enough to spur a license agreement as opposed to a permanent injunction.
  • FRAND rates should reflect the value of the SEPs at issue, so it makes no economic sense to estimate an aggregate rate for a standard by assuming that all SEP holders would charge the same rate as the one being challenged in the current lawsuit (a toy calculation following this list illustrates why).
  • Moreover, as the U.S. Court of Appeals for the Federal Circuit has held, allegations of “royalty stacking” – the allegedly “excessive” aggregate burden of high licensing fees stemming from multiple patents that cover a single product – should be backed by case-specific evidence.
  • Most importantly, when a judicial FRAND assessment is focused on the value that the SEP portfolio at issue has contributed to the standard and products embodying the standard, the resulting rates and terms will necessarily avoid both patent holdup and royalty stacking.
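
To see the arithmetic behind these last three points, consider a deliberately hypothetical sketch in TypeScript. Every figure below (the challenged rate, the number of licensors, the standard’s value share, the portfolio’s share of that value) is an assumption chosen purely for illustration; none comes from the article:

    // Hypothetical numbers only: why imputing the single challenged rate to
    // every SEP holder produces an economically absurd aggregate royalty.
    const challengedRate = 0.01;  // assumed: the 1% rate at issue in the suit
    const sepHolderCount = 100;   // assumed: number of licensors for the standard

    // Flawed method: assume all holders charge the challenged rate.
    const naiveAggregate = challengedRate * sepHolderCount; // 1.0, i.e. 100% of price

    // Value-based method: cap the aggregate at the standard's incremental
    // value to the product, then apportion among SEP portfolios.
    const standardValue = 0.05;   // assumed: the standard contributes 5% of value
    const portfolioShare = 0.10;  // assumed: this portfolio is 10% of that value
    const valueBasedRate = standardValue * portfolioShare; // 0.005, i.e. 0.5%

    console.log(`Naive aggregate burden: ${(naiveAggregate * 100).toFixed(0)}% of price`);
    console.log(`Value-based rate for this portfolio: ${(valueBasedRate * 100).toFixed(1)}%`);

On these made-up numbers, the imputation method implies that licensors collectively capture the entire product price, while value-based apportionment keeps the aggregate bounded by what the standard actually contributes. That, in a nutshell, is why the Federal Circuit insists on case-specific evidence for stacking claims.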

In sum, the Wong-Ervin and Layne-Farrar article highlights economic insights that are reflected in the sounder judicial opinions dealing with the determination of FRAND royalties.  The article points the way toward methodologies that provide SEP holders sufficient returns on their intellectual property to reward innovation and maintain incentives to invest in technologies that enhance the value of standards.  Read it and learn.

R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they don’t support the article’s call for the Senate to “jettison [the bill] entirely,” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements fails to adequately take account of the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her position that the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a beginning step and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer, on the question of how big a bill to start with, to those in Congress who affirmatively want more modernization.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary in the selection process seems like the most pressing issue, if they are to be resolved prior to the next pick being made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But it’s difficult to escape. Copyright minimalists are skeptical of an effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Director Sir Tim Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video play back of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.
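
To make the mechanics concrete, here is a minimal sketch of how a web page might use the EME API with Clear Key, the one baseline key system the specification itself defines. This is an illustrative sketch, not normative: the codec string, the element selector, and the license-server handling are assumptions for demonstration only.

    // Minimal EME flow (TypeScript): discover a key system, attach MediaKeys
    // to a <video> element, and begin a license exchange when the browser
    // encounters encrypted media.
    const video = document.querySelector('video') as HTMLVideoElement;

    const config: MediaKeySystemConfiguration[] = [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    }];

    async function setUpClearKey(): Promise<MediaKeys> {
      // 'org.w3.clearkey' is the only key system every EME implementation
      // must support; proprietary DRM systems are discovered the same way.
      const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', config);
      const mediaKeys = await access.createMediaKeys();
      await video.setMediaKeys(mediaKeys);
      return mediaKeys;
    }

    setUpClearKey().then((mediaKeys) => {
      video.addEventListener('encrypted', async (event: MediaEncryptedEvent) => {
        const session = mediaKeys.createSession();
        session.addEventListener('message', (msg: MediaKeyMessageEvent) => {
          // In a real deployment, msg.message would be sent to a license
          // server and the response passed back via session.update(...).
        });
        await session.generateRequest(event.initDataType, event.initData!);
      });
    });

Note that nothing in this flow supplies or mandates a particular DRM system: the API simply brokers messages between the page and whatever content decryption module the browser exposes, which is the narrow, interoperability-oriented role described in the W3C excerpt above.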

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. The Microsoft case, even as Wu frames it, was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in that standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM obscures his view of the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected] — no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that the W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that actions sometimes have unintended consequences (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to address impediments to the viability and growth of online markets that would sufficiently ameliorate the non-hypothetical harms of unauthorized uses. It is a modest step towards addressing a known universe of harms. A small step, but something to celebrate, not bemoan.

The Scalia Law School’s Global Antitrust Institute (GAI) has once again penned a trenchant law and economics-based critique of a foreign jurisdiction’s competition policy pronouncement.  On April 28, the GAI posted a comment (GAI Comment) in response to a “Communication from the [European] Commission (EC) on Standard Essential Patents (SEPs) for a European Digitalised Economy” (EC Communication).  The EC Communication centers on the regulation of SEPs, patents which cover standards that enable mobile wireless technologies (in particular, smartphones), in the context of the development and implementation of the 5th generation or “5G” broadband wireless standard.

The GAI Comment expresses two major concerns with the EC’s Communication.

  1. The Communication’s Ill-Considered Opposition to Competition in Standards Development

First, the Comment notes that the EC Communication appears to view variation in intellectual property rights (IPR) policies among standard-development organizations (SDOs) as a potential problem that may benefit from best practice recommendations.  The GAI Comment strongly urges the EC to reconsider this approach.  It argues that the EC should instead embrace the procompetitive benefits of variation among SDO policies, and avoid one-size-fits-all best practice recommendations that may interfere with or unduly influence choices regarding the specific rules that best fit the needs of individual SDOs and their members.

  2. The Communication’s Failure to Address the Question of Market Imperfections

Second, the Comment points out that the EC Communication refers to the need for “better regulation” without providing evidence of an identifiable market imperfection, which is a necessary, though not sufficient, condition for economic regulation.  The Comment stresses that the smartphone market, which is both standard- and patent-intensive, has experienced exponential output growth, falling market concentration, and a decrease in wireless service prices relative to the overall consumer price index.  These indicators, although not proof of causation, do suggest caution before potentially disrupting the carefully balanced fair, reasonable, and non-discriminatory (FRAND) ecosystem that has emerged organically.

With respect to the three specific areas identified in the Communication (i.e., best practice recommendations on (1) “increased transparency on SEP exposure,” including “more precision and rigour into the essentiality declaration system in particular for critical standards”; (2) boundaries of FRAND and core valuation principles; and (3) enforcement in areas such as mutual obligations in licensing negotiations before recourse to injunctive relief, portfolio licensing, and the role of alternative dispute resolution mechanisms), the Comment recommends that the EC broaden the scope of its consultation to elicit specific evidence of identifiable market imperfections.

The GAI Comment also points out that, in some cases, specific concerns mentioned in the Communication seem to be contradicted by the EC’s own published research.  For example, with respect to the asserted problems arising from over-declaration of essential patents, the EC recently published research noting the lack of “any reliable evidence that licensing costs increase significantly if SEP owners over-declare,” and concluding “that, per se the negative impact of over-declaration is likely to be minimal.”  Even assuming there is an identifiable market imperfection in this area, it is important to consider that determining essentiality is a resource- and time-intensive exercise, and that there are likely significant transaction-cost savings from the use of blanket declarations, which also serve to avoid liability for patent ambush (i.e., the deceptive failure to disclose essential patents during the standard-setting process).

  3. Concluding Thoughts

The GAI Comment implicitly highlights a flaw inherent in the EC’s efforts to promote high-tech innovation in Europe through its “Digital Agenda,” characterized as a pillar of the “Europe 2020” strategy that sets objectives for the growth of the European Union by 2020.  The EC’s strategy emphasizes government-centric “growth through regulatory oversight,” rather than reliance on untrammeled competition.  This emphasis is at odds with the fact that detailed regulatory oversight has been associated with sluggish economic growth within the European Union.  It also ignores the fact that some of the most dynamic, innovative industries in recent decades have been those enabled by the Internet, which until recently has largely avoided significant regulation.  The EC may want to rethink its approach if it truly wants to generate the innovation and economic gains long promised to its consumers and producers.

On Thursday, March 30, Friday, March 31, and Monday, April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to entry for new platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and argued that the combination would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection from competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA)

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/DuPont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v. cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments, like patent protection or trade secrecy, constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed both at the firm level (specific incentives) and at the industry level (general incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may also decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes or services, because an invention will cannibalize the merged entity’s profits in larger proportions than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow, who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.
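
The tension between the two effects admits a one-line textbook formalization (a standard simplification offered here as a gloss, not drawn from Petit’s post). Write $\pi_{\text{old}}$ for the incumbent’s profit on the current technology and $\pi_{\text{new}}$ for the monopoly profit on the invention. Then

$$
\Delta_{\text{incumbent}} \;=\; \pi_{\text{new}} - \pi_{\text{old}} \;<\; \pi_{\text{new}} \;=\; \Delta_{\text{entrant}} \qquad \text{whenever } \pi_{\text{old}} > 0,
$$

so the incumbent values the invention less than an outsider does, by exactly the profit it would cannibalize; that wedge is Arrow’s replacement effect. Schumpeter’s appropriability point operates on the other term: stronger protecting devices, or greater post-merger market power, increase the fraction of $\pi_{\text{new}}$ the innovator can actually capture.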

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion of concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile or bypass the two views of the world. Recent studies by Carl Shapiro and Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that both struck bedrock with their seminal contributions, laying down two systems of belief about the workings of innovation-driven markets.

Now, beyond the theory, the appropriability v. cannibalization framework provides from the outset an appealing lens for the examination of mergers in R&D-driven industries generally. From an operational perspective, the antitrust agency will attempt to understand whether the transaction increases appropriability – which leans in favour of clearance – or cannibalization – which leans in favour of remediation. At the same time, however, the downside of the appropriability v. cannibalization framework (and of any framework more generally) is that it may oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations, one on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the stronger the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is far from obvious. Most of us know – and our antitrust agencies’ misgivings with other sectors confirm it – that IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection from competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the industry-specific strength of IP rights, one that goes beyond patent data and statistics, thus constitutes a necessary step in merger review.

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and, as a result, wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D costs, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor not only any increase of the merging entity’s power over price, but also any improvement of its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), and also when the transaction combines complementary technological and marketing assets. In Dow/DuPont, no efficiency argument has apparently been made by the parties, so it is difficult to know if and how such issues played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely, that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, among other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory,” is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute inventions. The point, instead, is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, it is relevant in industries driven primarily by a process of cumulative innovation. But it is much less helpful for understanding the incentives of a monopolist in industries subject to technological change. As a result, the first question that should guide an antitrust agency’s investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technological disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wise caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?

Shubha Ghosh is Crandall Melvin Professor of Law and Director of the Technology Commercialization Law Program at Syracuse University College of Law

How should patents be taken into consideration in merger analysis? When does the combining of patent portfolios lead to anticompetitive concerns? Two principles should guide these inquiries. First, as the Supreme Court held in its 2006 decision in Independent Ink, ownership of a patent does not confer market power. This ruling came in the context of a tying claim, but it is generalizable. While ownership of a patent can provide advantages in the market, such as access to techniques that are more effective than what is available to a competitor or the ability to keep competitors from making desirable differentiations in existing products, ownership of a patent or patent portfolio does not per se confer market power. Competitors might have equally strong and broad patent portfolios. The power to limit price competition may be counterbalanced by competition over technology and product quality.

A second principle about patents and markets, however, bespeaks more caution in antitrust analysis. Patents can create information problems while at the same time potentially resolving some externality problems arising from knowledge spillovers. Information problems arise because patents are not well-defined property rights with clear boundaries. While patents are granted to novel, nonobvious, useful, and concrete inventions (as opposed to abstract, disembodied ideas), it is far from clear when a patented invention is actually nonobvious. Patent rights extend to several possible embodiments of a novel, useful, and nonobvious conception. While in theory this problem could be solved by limiting patent rights to narrow embodiments, the net result would be increased uncertainty through patent thickets and divided ownership. Inventions do not come in readily discernible units or engineered metes and bounds (despite the rhetoric).

The information problems created by patents do not create traditional market power in the sense of having some control over the price charged to consumers, but they do impose costs on competitors that can give a patent owner some control over market entry and the market conditions confronting consumers. The Court’s perhaps sanguine decoupling of patents and market power in its 2006 decision has some valence in a market setting where patent rights are somewhat equally distributed among competitors. In such a setting, each firm faces the same uncertainties that arise from patents. However, if patent ownership is imbalanced among firms, competition authorities need to act with caution. The challenge is identifying an unbalanced patent position in the marketplace.

Mergers among patent-owning firms invite antitrust scrutiny for these reasons. Metrics of patent ownership that focus on the quantity of patents owned, adjusted for the number of claims, can offer a snapshot of the ownership distribution; a rough illustration follows below. But patent numbers need to be connected to the costs of operating the firm. Patents can lower a firm’s costs, create a niche for a particular differentiated product, and give a firm a head start in the next generation of technologies. Mergers that lead to an increased concentration of patent ownership may raise eyebrows, but those that lead to a significant increase in competitors’ costs and create potential impediments to market entry require a response from competition authorities. That response could be blocking the merger or, perhaps more practically in most instances, divesting the patent portfolio through licensing requirements. This last approach is particularly appropriate where the technologies at issue are analogous to standard-essential patents in the FRAND-encumbered standard-setting context.
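
As a rough illustration of such a snapshot metric (my own sketch, not a method proposed in the post; the field names and figures are invented), one might claim-weight each firm’s portfolio and compute a familiar Herfindahl-Hirschman-style concentration index:

```typescript
// Illustrative only: a claim-weighted snapshot of patent-ownership
// concentration using the Herfindahl-Hirschman index (HHI). The weighting
// choice and all figures are assumptions, not data from the post.

interface PortfolioEntry {
  firm: string;
  patents: number;   // number of patents held
  avgClaims: number; // average claims per patent (a crude breadth proxy)
}

function ownershipHHI(portfolios: PortfolioEntry[]): number {
  const weights = portfolios.map(p => p.patents * p.avgClaims);
  const total = weights.reduce((sum, w) => sum + w, 0);
  // HHI: sum of squared percentage shares; 10,000 means a single owner.
  return weights.reduce((hhi, w) => hhi + ((100 * w) / total) ** 2, 0);
}

// Hypothetical pre-merger industry of six patent-holding firms.
const big6: PortfolioEntry[] = [
  { firm: "A", patents: 900, avgClaims: 15 },
  { firm: "B", patents: 800, avgClaims: 14 },
  { firm: "C", patents: 700, avgClaims: 16 },
  { firm: "D", patents: 600, avgClaims: 15 },
  { firm: "E", patents: 500, avgClaims: 13 },
  { firm: "F", patents: 400, avgClaims: 17 },
];
console.log(ownershipHHI(big6).toFixed(0)); // pre-merger concentration snapshot
```

Whatever weighting is chosen, the caveat in the text stands: such an index is only a snapshot of the distribution, and says nothing by itself about rivals’ costs or entry conditions.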

Claims of synergies should, in many instances, be met with skepticism when the patent portfolios of the merging companies are combined. While the technologies may be complementary, yielding benefits that go beyond those arising from a cross-licensing arrangement, the integration of portfolios may serve to raise costs for potential rivals in the marketplace. These barriers to entry may arise even in the case of vertical integration when the firms internalize contracting costs for technology transfer through ownership. Vertical integration of patent portfolios may raise costs for rivals both at the manufacturing and the distribution levels.

These ideas are set forth as propositions to be tested, but also as general policy guidance for merger review involving companies with substantial patent portfolios. The ChemChina-Syngenta merger perhaps opens up global markets, but it may well impose barriers for companies in the agriculture market. The Bayer-Monsanto and Dow-DuPont mergers have questionable synergies. Even if synergies are present, these projected benefits need to be weighed against very identifiable sources of market foreclosure. While patents may not create market power per se, according to the Supreme Court, the potential for mischief should not be underestimated.