This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.
Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs).
However, despite the many social benefits attributed to intellectual property protection, recent decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections that patent law offers inventors.
These critics argue that excessive patent protection is holding back western economies. For instance, they posit that owners of standard-essential patents (“SEPs”) charge their commercial partners too much for the right to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) deter innovation by small startups through “extortionate” litigation tactics.
Unfortunately, this movement has led to an erosion of the remedies available to patent holders in patent disputes.
The many benefits of patent protection
While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.
By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.
This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.
The shift away from injunctive relief
Of the many recent reforms to patent law, the most consequential has arguably been the sharp limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of so-called standard essential patents (SEPs).
However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.
The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.
Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits.
But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far.
And injunctions are not the only area of patent law that has gradually shifted against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of inter partes review.
Critically, the intellectual movement to soften patent protection has also had ramifications outside the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably SDOs – to adopt stances that advance the interests of technology implementers at the expense of inventors.
For instance, one of the most noteworthy developments was the IEEE’s sweeping 2015 revision of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees.” They also mandated that royalties for SEPs be based on the value of the smallest saleable component that practices the patented technology. Both measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.
The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.
While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.
This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).
Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Tim Brennan (Professor, Economics & Public Policy, University of Maryland; formerly of the FCC and FTC).]
Observers on TOTM and elsewhere have pointed out the importance of preserving patent rights as pharmaceutical and biotechnology companies pursue development of treatments for, and better vaccines against, COVID-19. As the benefits of these treatments could reach into the trillions of dollars (see here for a casual estimate and here for a more serious one), it is hard to imagine a level of reward for successful innovations that is too high.
On the other hand, as these and other commentaries suggest if only implicitly, the high social value of a coronavirus treatment or vaccine may well lead to calls to limit the ability to profit from a patent. It is easy to imagine that a developer of a vaccine will not be able to charge the patent-protected price (note avoidance of the term “monopoly”). It almost certainly will not be able to do so if it cannot use price discrimination in order to allow those lacking the means to pay a uniform higher price to get the vaccine.
However, there is an alternative to patents that has not received much attention in the policy discussion—having the government (Treasury, NIH, CDC) offer a prize in exchange for open access to a successful vaccine or treatment. Prizes are not new; they go back at least to the early 18th century, when Britain offered a prize for improvements in clock accuracy to facilitate ocean-going navigation. Many prizes have been offered by the private sector, both for a sponsor’s own use—Netflix offering a prize for improvements to its movie-recommendation algorithm—and to altruistically promote innovation. Charles Lindbergh’s 1927 first solo transatlantic flight, and previous attempts by others, were motivated at least in part by a $25,000 prize offered by a New York hotel owner.
In light of the net benefits of an improved vaccine, indicated perhaps by the level of spending in enacted and proposed stimulus and rescue programs, a prize of, oh, $25 billion is practically chump change. But would a prize make sense here?
I and two former colleagues at Resources for the Future, Molly Macauley and Kate Whitefoot, analyzed the use of prizes in comparison to patents and other methods to solicit and procure innovation. This work was inspired by Molly’s interest in NASA’s use of prizes to induce innovations in space exploration equipment. On the theory side, we were interested because models of patents typically treat patents as prizes—the successful innovator gets $X in expected profit—and thus were unable to explain why one might want to choose prizes rather than patents and vice versa.
When is a prize a “prize”?
The answer to this question requires being clear on what I mean by a prize. A familiar type of prize is the “best” of something, from first prize in the middle school science fair to the Academy Award for Best Picture. This is not the kind of prize I’m talking about with regard to coming up with a treatment for or vaccine against COVID-19. (George Mason’s Mercatus Center is offering prizes of this sort, ranging from $50,000 for “best coronavirus policy writing” to $500,000 for “best effort to find a treatment rapidly”; h/t to Geoff Manne.) Rather, it is a prize for being first to achieve a specific outcome, for example, a solo flight across the Atlantic Ocean.
A necessary component of such prizes is a winning condition, specified in advance. For example, the $10 million Ansari X Prize to promote commercial space travel was not awarded just for some general demonstration of feasibility that pleased a set of judges. Rather, it specifically went to the first team that could “carry three people 100 kilometers above the earth’s surface twice within two weeks.” Contestants knew what they had to do, and there was no dispute when the winner met the criterion for getting the prize.
Prizes or patents?
The need for a winning condition highlights one of the two main criteria affecting the choice of patents or prizes: advance knowledge of the specific goal. Economy-wide, the advantage of patents over prizes is that entrepreneurial innovators are rewarded for coming up with sufficiently novel products or processes of value. Knowledge regarding what is worth innovative effort is decentralized and often tacit. On the other hand, if a funder, including the government, knows what it wants sufficiently well that it can specify a winning condition, a prize can be sensible as a way to focus innovative effort toward that desired objective.
The second criterion for choosing between patents and prizes is more subtle. Someone undertaking research effort to come up with a patent bears two risks. The first is the risk that the effort will not be successful, not just overall but in being the first to be able to file for a patent. That risk is essentially shared by those pursuing a prize, where being first involves not filing for a patent but meeting the winning condition. However, patent seekers bear another risk, which is how much the patent will be worth if they win it. Prize seekers do not bear that risk, as the prize is specified in advance. (Economic models of patent activity tend to ignore this variation.) Thus, a prize may induce more risk-averse innovators to compete for the prize.
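This risk-sharing intuition can be made precise with a stylized illustration (the notation and numbers here are mine, not drawn from the underlying research): suppose a patent's eventual value V is uncertain, while a prize pays a fixed amount P set equal to the patent's expected value.

```latex
% Stylized comparison of a prize vs. a patent for a risk-averse innovator.
% Assumptions (mine): success probabilities are identical under both schemes,
% and the prize is set equal to the patent's expected value, P = E[V].
% For any concave (risk-averse) utility function u, Jensen's inequality gives:
\[
  \underbrace{\mathbb{E}\!\left[u(V)\right]}_{\text{expected utility of the patent}}
  \;\le\;
  u\!\left(\mathbb{E}[V]\right)
  \;=\;
  \underbrace{u(P)}_{\text{utility of the certain prize}}
\]
% A risk-averse innovator thus weakly prefers the certain prize even though
% the two rewards are equal in expectation -- consistent with the claim that
% prizes may draw more risk-averse innovators into the contest.
```

The inequality is strict whenever V actually varies and the innovator is strictly risk averse, which is the case the text describes.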
Assuming a winning condition for a COVID-19 treatment or vaccine can be specified in advance—I leave that to the medical people—our present public health dilemma could be well suited for a prize. As observed earlier, with both net benefits and the public spending responses already enacted running into the trillions of dollars, such a prize could and should be quite large. That may be difficult to sell politically but, as also observed earlier, the government may not be able to commit credibly to allowing a patent winner to exploit the treatment or vaccine’s economic value.
Design issues, TBD
If prizes become an appealing way to encourage COVID-19 mitigation innovations, a few design issues remain on the table.
One is whether to have intermediate prizes, with their own winning conditions, to narrow down the field of contestants to those with more promising approaches. One would need some sort of winning condition for this, of course. A second is whether the innovation will be achieved more quickly by allowing contestants to combine efforts. The virtues of competition may be outweighed by being able to hedge bets rather than risk being stuck going down a blind alley.
A third is whether to go with winner-take-all or to offer second and third prizes. One advantage of multiple prizes is that they can mitigate some of the risk to innovators, at the potential cost of reducing the effort to win. However, one could imagine that someone other than the winner comes up with a treatment or vaccine that does better than the winner’s but was found after the winner met the condition. This leads to a fourth policy choice: should contestants, the winner or others, retain patents, even if the winning treatment or vaccine is freely licensed and made available at marginal cost?
All of these choices, along with the choice of whether to offer a prize and what that prize should be, are matters of medical and pharmaceutical judgment. But economics does highlight the potential advantages of a prize and suggests that it may deserve some attention as other policy judgments are being made.
[TOTM: This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]
The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical-) liberalism, at heart I am a pragmatist. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would entertain using a crisis to advance my agenda unless doing so clearly serves to ameliorate immediate problems.
Sadly, I have also learned that there are those who are not similarly pragmatic and are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter circulating on Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.
The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things):
voluntary licensing and non-enforcement of IP;
abrogation of IPR by WIPO members using the “flexibility” in the international IP regime;
the removal of geographical restrictions on IP licenses;
forcing patents into COVID-19 patent pools; and
the implementation of compulsory licensing.
And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”
Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.
We all want treatments for infection, vaccines for prevention, and an ample supply of personal protective equipment as soon as possible. But if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines, and better preventative tools in the long run.
Fundamentally, the letter reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs.
What is most certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open source status or forcing compulsory licensing on the firms doing that work will encourage that work to proceed with all due haste—and every indication that the opposite is the case.
Where there are short-term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.
Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:
Culture, Fitness & Entertainment
“HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of ‘Veep,’ ‘The Sopranos,’ ‘Silicon Valley’”
Dozens (or more) of artists, both famous and lesser known, are releasing free back-catalog performances or taking part in free live-streaming sessions on social media platforms. Notably, viewers are often welcome to donate or “pay what they want” to help support these artists (more on this below).
The NBA, NFL, and NHL are offering free access to their back catalogs of games.
A large array of music production software can now be used free on extended trials for 3 months (or completely free and unlimited in some cases).
Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
“Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies to help strained IT teams and partners ready themselves and their clients for remote work.”
Zoom expanded free access and relaxed usage limits for educational institutions around the world.
Incentivize innovation, now more than ever
In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon.
Any clear-eyed assessment of the broader course of the pandemic and the response to it gives the lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival.
In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship.
As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?
Indeed, the flood of unable-to-tour artists currently offering “donate what you can” streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although actually making a living while touring is possibly out of reach for many or most artists, those who had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away.
There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime.
And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily as opposed to through government fiat. When done voluntarily the IP owner determines the contours and extent of “open sourcing” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e. once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment.
Our lawmakers can always act if a situation arises in which a particular piece of property is required for the public good. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences.
[TOTM: This post is authored by Daniel Takash (regulatory policy fellow at the Niskanen Center, where he manages the Captured Economy Project, https://capturedeconomy.com; you can follow him @danieltakash or @capturedecon).]
The pharmaceutical industry should be one of the most well-regarded industries in America. It helps bring drugs to market that improve, and often save, people’s lives. Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular, trailing behind fossil fuels, lawyers, and even the federal government. The opioid crisis has dominated headlines for the past few years, but the high price of drugs is a top-of-mind issue that generates significant animosity toward the pharmaceutical industry. The effects of high drug prices are felt not just on every trip to the pharmacy, but also by those who are priced out of life-saving treatments. Many Americans simply can’t afford what their doctors prescribe. The pharmaceutical industry helps save lives, but it has also been credibly accused of anticompetitive behavior, not just by generic manufacturers but even by other brand manufacturers.
These extraordinary times are an opportunity to right the ship. AbbVie, roundly criticized for building a patent thicket around Humira, has donated its patent rights to a promising COVID-19 treatment. This is to be celebrated. Yet pharma’s bad reputation is defined by its worst behaviors and the frequent apologetics for overusing the patent system. Hopefully corporate social responsibility will prevail, and such abuses will cease in the future.
The most effective long-term treatment for COVID-19 will be a vaccine. We also need drugs to treat those afflicted with COVID-19 to improve recovery and lower mortality rates for those that get sick before a vaccine is developed and widely available. This requires rapid drug development through effective public-private partnerships to bring these treatments to market.
Without a doubt, these solutions will come from the pharmaceutical industry. Increased funding for the National Institutes of Health, nonprofit research institutions, and private pharmaceutical researchers will likely be needed to help accelerate the development of these treatments. But we must be careful to ensure that whatever upfront public support is given to these entities results in a fair trade-off for Americans. The U.S. taxpayer is one of the largest investors in early- to mid-stage drug research, and we need to make sure that we are a good investor.
Basic research into the costs of drug development, especially when taxpayer subsidies are involved, is a necessary start. This is a feature of the We PAID Act, introduced by Senators Rick Scott (R-FL) and Chris Van Hollen (D-MD), which requires the Department of Health and Human Services to contract with the National Academy of Medicine to determine the reasonable price of drugs developed with taxpayer support. This reasonable price would include a suitable reward to the private companies that did the important work of finishing drug development and gaining FDA approval. This is important, as setting a price too low would reduce investment in indispensable research and development. But it must be balanced against the risk of using patents to charge prices above and beyond those necessary to finance research, development, and commercialization.
A little sunshine can go a long way. We should trust that pharmaceutical companies will develop a vaccine and treatments for coronavirus, but we must also verify that these are affordable and accessible through public scrutiny. Take drug manufacturer Gilead Sciences’ about-face on its application for orphan drug status for the possible COVID-19 treatment remdesivir. Remdesivir, developed in part with public funds and already covered by three Gilead patents, technically satisfied the definition of an “orphan drug,” as COVID-19 (at the time of the application) afflicted fewer than 200,000 patients. In a pandemic that could infect tens of millions of Americans, this designation is obviously absurd, and public outcry led Gilead to ask the FDA to rescind the application. Gilead claimed it sought the designation to speed up FDA review, and that might be true. Regardless, public attention meant that the FDA would give remdesivir expedited review without Gilead needing a designation that looks unfair to the American people.
The success of this isolated effort is absolutely worth celebrating, but we need more research to better comprehend the pharmaceutical industry’s needs, and that is just what the study provisions of We PAID would provide.

A thorough analysis of the kind We PAID mandates is the best way to fully understand just how much support the pharmaceutical industry needs, and just how successful it has been thus far. The NIH, one of the major sources of publicly funded research, invests about $41.7 billion annually in medical research. We need to better understand how these efforts link up, and how the torch is passed from public to private hands.
Patents are essential to the functioning of the pharmaceutical industry, incentivizing drug development through temporary periods of exclusivity. But it is equally essential, in light of the considerable investment already made by taxpayers in drug research and development, to make sure we understand the effects of these incentives and calibrate them to balance the interests of patients and pharmaceutical companies. Most drugs require research funding from both public and private sources as well as patent protection. And the U.S. is one of the biggest investors in drug research worldwide (even compared to drug companies), yet Americans pay the highest prices in the world. Are these prices justified, and can we improve patent policy to bring these costs down without harming innovation?
Beyond a thorough analysis of drug pricing, what makes We PAID one of the most promising solutions to the problem of excessively high drug prices are the accountability mechanisms included. The bill, if made law, would establish a Drug Access and Affordability Committee. The Committee would use the methodology from the joint HHS and NAM study to determine a reasonable price for affected drugs (around 20 percent of drugs currently on the market, if the bill were law today). Any companies that price drugs granted exclusivity by a patent above the reasonable price would lose their exclusivity.
This may seem like a price control at first blush, but it isn’t, for two reasons. First, it applies only to drugs developed with taxpayer dollars, which any COVID-19 treatments or cures almost certainly would be, considering the $785 million the NIH has spent researching coronaviruses since 2002. It’s an accountability mechanism that would ensure the government is getting its money’s worth. The tool is akin to ensuring that a government contractor is not charging more than would be reasonable, lest it lose its contract.
Second, it is even less stringent than pulling a contract with a private firm overcharging the government for the services provided. Why? Losing a patent does not mean losing the ability to make a drug, or any other patented invention for that matter. This basic fact is often lost in the patent debate, but it cannot be stressed enough.
If patents functioned as licenses, every patent expiration would mean another product going off the market. In reality, a patent’s expiration means that any other firm may compete using the patented design. Even if a firm violated the price regulations included in the bill and lost its patent, it could continue manufacturing the drug. And so could any other firm, bringing down prices for all consumers by opening the market to competition.
The We PAID Act could be a dramatic change for the drug industry, and because of that many in Congress may want to first debate the particulars of the bill. That is fine, so long as this promising legislation isn’t watered down beyond recognition. But any objections to the Drug Access and Affordability Committee and its reasonable-pricing regulations are no excuse not to, at a bare minimum, pass the study included in the bill as part of future coronavirus packages, if not sooner. It is an inexpensive way to gather good information in a single, reputable source that would allow us to shape good policy.
Good information is needed for good policy. When the government lays the groundwork for future innovations by financing research and development, it can be compared to a venture capitalist providing the financing necessary for an innovative product or service. But just like in the private sector, the government should know what it’s getting for its (read: taxpayers’) money and make recipients of such funding accountable to investors.
The COVID-19 outbreak will be the most pressing issue for the foreseeable future, but determining how pharmaceuticals developed with public research are priced is necessary in good times and bad. The final prices for these important drugs might be fair, but the public will never know without a trusted source examining this information. Trust, but verify. The pharmaceutical industry’s efforts in fighting the COVID-19 pandemic might be the first step to improving Americans’ relationship with the industry. But we need good information to make that happen. Americans need to know when they are being treated fairly, and that policymakers are able to protect them when they are treated unfairly. The government needs to become a better-informed investor, and that won’t happen without something like the We PAID Act.
[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]
In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.
Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe (an estimated 5 billion people currently own a device), Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two investigations in the EU; see here and here).
In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.
Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.
The elephant in the room
The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document, which runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).
At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).
Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments did exist, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.
The misguided push for component level pricing
The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this practice ran afoul of Federal Circuit law, arguing instead that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious as a matter of both law and policy.
From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as the basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases, even if that methodology is contested. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.
More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:
Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.
While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.
Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).
One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.
A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.
In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.
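The informational advantage of a device-value royalty base can be illustrated with a simple numerical sketch. All prices, rates, and volumes below are hypothetical, invented purely for exposition; they are not figures from the case:

```python
# Hypothetical illustration: an identical modem chip ($20) is sold into
# heterogeneous end devices. All figures are invented for exposition.
devices = {
    "budget phone":   {"price": 150.0, "units": 1_000_000},
    "flagship phone": {"price": 900.0, "units": 1_000_000},
    "connected car":  {"price": 30_000.0, "units": 50_000},
}
CHIP_PRICE = 20.0

# Option 1: a single rate on the chip -- per-unit royalties are identical
# across segments and ignore end-device value entirely.
chip_rate = 0.05  # 5% of the chip price
chip_based = {d: chip_rate * CHIP_PRICE * v["units"]
              for d, v in devices.items()}

# Option 2: a single (much lower) rate on the end device -- royalties
# automatically scale with the value each segment places on connectivity,
# without the patent holder having to predict which segments will succeed.
device_rate = 0.002  # 0.2% of the device price
device_based = {d: device_rate * v["price"] * v["units"]
                for d, v in devices.items()}

for d in devices:
    print(f"{d:15s} chip-based: ${chip_based[d]:>12,.0f}   "
          f"device-based: ${device_based[d]:>12,.0f}")
```

Under the chip-based regime every segment yields the same $1 per unit regardless of how much value the technology unlocks; under the device-based regime the patent holder’s reward tracks each segment’s market success, which is precisely the state-contingent payoff the Bousquet analysis describes.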
Prices are almost impossible to reconstruct
Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA.
For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:
Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.
Given the tremendous heterogeneity that usually exists among the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of its patent portfolio. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.
Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:
Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.
As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.
For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).
In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:
Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.
The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
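The apportionment problem with complements can be made concrete in a toy model. All willingness-to-pay figures below are hypothetical, chosen only to illustrate the logic: when neither component has standalone value, each one’s incremental contribution equals 100% of the bundle’s value, so per-component “contributions” cannot simply be summed or read off cost shares:

```python
# Hypothetical willingness-to-pay figures for two perfect complements
# (e.g., a camera module and a modem). Numbers are invented for illustration.
BUNDLE_VALUE = 300.0   # consumer value of camera + modem together
camera_alone = 0.0     # a perfect complement is worthless on its own
modem_alone = 0.0

# Each component's incremental contribution is the value lost if it is removed:
camera_incremental = BUNDLE_VALUE - modem_alone   # 100% of the bundle
modem_incremental = BUNDLE_VALUE - camera_alone   # also 100% of the bundle

# Summing "contributions" gives 600 for a 300 bundle -- a double-count.
# This is why cost shares (the court's 60-70% figure) cannot apportion
# value among strong complements.
print(camera_incremental, modem_incremental)
```

The takeaway is that with strong complementarities there is no well-defined way to split the bundle’s value component by component, which is exactly the fallacy in the court’s reliance on the 60-70% non-communication cost share.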
In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:
Nothing is more alien to antitrust than enquiring into the reasonableness of prices.
This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price for a technology are almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:
If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.
An important but unheralded announcement was made on October 10, 2018: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) released a draft CEN-CENELEC Workshop Agreement (CWA) on the licensing of Standard Essential Patents (SEPs) for 5G/Internet of Things (IoT) applications. The final agreement, due to be published in early 2019, is likely to have significant implications for the development and roll-out of both 5G and IoT applications.
CEN and CENELEC, which along with the European Telecommunications Standards Institute (ETSI) are the officially recognized standard setting bodies in Europe, are private international non-profit organizations with a widespread network of technical experts from industry, public administrations, associations, academia, and societal organizations. This first Workshop brought together representatives of the 5G/Internet of Things (IoT) technology user and provider communities to discuss licensing best practices and recommendations for a code of conduct for licensing of SEPs. The aim was to produce a CWA that reflects and balances the needs of both communities.
The final consensus outcome of the Workshop will be published as a CEN-CENELEC Workshop Agreement (CWA). The draft, which is available for public comment, comprises principles and guidelines that lay a foundation for future licensing of standard essential patents for fifth-generation (5G) technologies. The draft also contains a Q&A section to aid new implementers and patent holders.
The IoT ecosystem is likely to have over 20 billion interconnected devices by 2020 and represent a market of $17 trillion (about the same as the current GDP of the U.S.). The data collected by one device, such as a smart thermostat that learns what time the consumer is likely to be at home, can be used to increase the performance of another connected device, such as a smart fridge. Cellular technologies are a core component of the IoT ecosystem, alongside applications, devices, software, etc., as they provide connectivity within the IoT system. 5G technology, in particular, is expected to play a key role in complex IoT deployments, which will extend the use of cellular networks from smartphones to smart home appliances, autonomous vehicles, health care facilities, etc., in what has been aptly described as the fourth industrial revolution.
Indeed, the role of 5G in IoT is so significant that the proposed $117 billion takeover bid for U.S. tech giant Qualcomm by Singapore-based Broadcom was blocked by President Trump, citing national security concerns. (A letter sent by the Committee on Foreign Investment in the US suggested that Broadcom might starve Qualcomm of investment, preventing it from competing effectively against foreign competitors, implicitly those in China.)
While commercial roll-out of 5G technology has not yet fully begun, several efforts are being made by innovator companies, standard setting bodies and governments to maximize the benefits from such deployment.
The draft CWA Guidelines (hereinafter “the guidelines”) are consistent with some of the recent jurisprudence on SEPs. While they offer relatively little guidance specific to 5G SEPs, they clarify several aspects of SEP licensing that will be useful, particularly the negotiating process and the conduct expected of both parties.
The guidelines contain six principles, followed by a set of questions pertaining to SEP licensing. The principles deal with:
The obligation of SEP holders to license the SEPs on Fair, Reasonable and Non-Discriminatory (FRAND) terms;
The obligation on both parties to conduct negotiations in good faith;
The obligation of both parties to provide necessary information (subject to confidentiality) to facilitate timely conclusion of the licensing negotiation;
Compensation that is “fair and reasonable” and achieves the right balance between incentives to contribute technology and the cost of accessing that technology;
A non-discriminatory obligation on the SEP holder for similarly situated licensees even though they don’t need to be identical; and
Recourse to a third party FRAND determination either by court or arbitration if the negotiations fail to conclude in a timely manner.
There are 22 questions and answers as well, which define basic terms and touch on issues such as: what counts as good-faith conduct by negotiating parties, global portfolio licensing, FRAND royalty rates, patent pooling, dispute resolution, injunctions, and other issues relevant to FRAND licensing policy in general.
Below are some significant contributions that the draft report makes on issues such as the supply-chain level at which licensing is best done, the treatment of small and medium enterprises (SMEs), non-disclosure agreements, good-faith negotiations, and alternative dispute resolution.
Typically in the IoT ecosystem, many technologies will be adopted, several of which will be standardized. The guidelines offer help to product and service developers in this regard, suggesting that one may need to obtain licenses from SEP owners for products or services incorporating communications technology like 3G UMTS, 4G LTE, Wi-Fi, NB-IoT, Cat-M, or video codecs such as H.264. The guidelines, however, clarify that with the deployment of IoT, licenses for several other standards may be needed, and developers should be mindful of these complexities when starting out in order to avoid potential infringements.
Notably, the guidelines suggest that in order to simplify licensing, reduce costs for all parties, and maintain a level playing field between licensees, SEP holders should license at a single level of the supply chain. While this may vary between different industries, for communications technology the licensing point is often at the end-user equipment level. There has been a fair bit of debate on this issue, and the recent order by Judge Koh granting the FTC’s motion for partial summary judgment deals with some of it.
In the judgment delivered on November 6, Judge Koh relied primarily on the Ninth Circuit decisions in Microsoft v Motorola (2012 and 2015) to rule on the core issue of the scope of FRAND commitments, specifically whether the duty to license extends to all levels of the supply chain or is confined to the end-device level. The court interpreted the pro-competitive principles behind the non-discrimination requirement to mean that such commitments are “sweeping”: essentially, an SEP holder has to license to anyone willing to pay a FRAND rate, globally. It also cited Ericsson v D-Link, where the Federal Circuit held that “compliant devices necessarily infringe certain claims in patents that cover technology incorporated into the standard and so practice of the standard is impossible without licenses to all incorporated SEP technology.”
The guidelines speak to the importance of non-disclosure agreements (NDAs) in such licensing negotiations, given that some of the information exchanged between the parties, such as claim charts, may be sensitive and confidential. An undue delay in agreeing to an NDA, without well-founded reasons, might therefore be taken as evidence of a lack of good faith in negotiations, rendering the licensee unwilling.
They also provide quite a boost for small and medium enterprises (SMEs) in licensing negotiations by addressing the duty of SEP owners to be mindful of SMEs that may be less experienced and therefore lack the information needed to assure themselves that proposed terms are FRAND. The guidelines provide that SEP owners should supply whatever information they can under NDA to help the negotiation process. Equally, the same obligation applies to a more experienced licensee dealing with an SEP owner that is an SME.
There is some clarity on time frames for negotiations and the guidelines provide a maximum time that parties should take to respond to offers and counter offers, which could extend up to several months in complex cases involving hundreds of patents. The guidelines also prescribe conduct of potential licensees on receiving an offer and how to make counter-offers in a timely manner.
Furthermore, the guidelines lay down the various ways in which royalty rates may be structured and clarify that there is no one fixed way in which this may be done. Similarly, they offer myriad ways in which potential licensees may be able to determine for themselves if the rates offered to them are fair and reasonable, such as third party patent landscape reports, public announcements, expert advice etc.
Finally, in the case that a negotiation reaches an impasse, the guidelines endorse an alternative dispute mechanism such as mediation or arbitration for the parties to resolve the issue. Bodies such as International Chamber of Commerce and World Intellectual Property Organization may provide useful platforms in this regard.
Almost 20 years have passed since technology pioneer Kevin Ashton first coined the phrase Internet of Things. While companies are gearing up to participate in the IoT market, regulation and policy in the IoT world remain far from a predictable framework. There is much speculation about how rules and standards are likely to shape up, with little or no guidance for companies on how to prepare for what faces them very soon. Concrete efforts such as these are therefore welcome. The draft guidelines do attempt to offer some much-needed clarity and are now open for public comments, due by December 13. It will be good to see what the final CWA report on licensing of SEPs for 5G and IoT looks like.
The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”
However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.
Limitations of the FTC Study
Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”
One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”
Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”
Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”
Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.
FTC Commissioners Put Cart Before Horse
The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.
In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”
This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”
Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.
It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.
Commentators Jump the Gun
Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.
For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”
Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”
The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”
Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.
While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.
Last Thursday, Elon Musk, the founder and CEO of Tesla Motors, issued an announcement on the company’s blog with a catchy title: “All Our Patent Are Belong to You.” Commentary in social media and on blogs, as well as in traditional newspapers, jumped to the conclusion that Tesla is abandoning its patents and making them “freely” available to the public for whomever wants to use them. As with all things involving patented innovation these days, the reality of Tesla’s new patent policy does not match the PR spin or the buzz on the Internet.
The reality is that Tesla is not disclaiming its patent rights, despite the title of Musk’s announcement or his invocation of the tread-worn cliché that patents impede innovation. In fact, Tesla’s new policy is an example of Musk exercising patent rights, not abandoning them.
If you’re not puzzled by Tesla’s announcement, you should be. This is because patents are a type of property right that secures the exclusive rights to make, use, or sell an invention for a limited period of time. These rights do not come cheap — inventions cost time, effort, and money to create, and companies like Tesla then exploit these property rights by spending even more time, effort, and money converting inventions into viable commercial products and services sold in the marketplace. Thus, if Tesla’s intention is to make its ideas available for public use, why, one may wonder, did it bother to expend tremendous resources acquiring the patents in the first place?
The key to understanding this important question lies in a single phrase in Musk’s announcement that almost everyone has failed to notice: “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” (emphasis added)
What does “in good faith” mean in this context? Fortunately, one intrepid reporter at the L.A. Times asked this question, and the answer from Musk makes clear that this new policy is not an abandonment of patent rights in favor of some fuzzy notion of the public domain, but rather it’s an exercise of his company’s patent rights: “Tesla will allow other manufacturers to use its patents in ‘good faith’ – essentially barring those users from filing patent-infringement lawsuits against [Tesla] or trying to produce knockoffs of Tesla’s cars.” In the legalese known to patent lawyers and inventors the world over, this is not an abandonment of Tesla’s patents, this is what is known as a cross license.
In plain English, here’s the deal that Tesla is offering to manufacturers and users of its electrical car technology: in exchange for using Tesla’s patents, the users of Tesla’s patents cannot file patent infringement lawsuits against Tesla if Tesla uses their other patents. In other words, this is a classic deal made between businesses all of the time — you can use my property and I can use your property, and we cannot sue each other. It’s a similar deal to that made between two neighbors who agree to permit each other to cross each other’s backyard. In the context of patented innovation, this agreement is more complicated, but it is in principle the same thing: if automobile manufacturer X decides to use Tesla’s patents, and Tesla begins infringing X’s patents on other technology, then X has agreed through its prior use of Tesla’s patents that it cannot sue Tesla. Thus, each party has licensed the other to make, use and sell their respective patented technologies; in patent law parlance, it’s a “cross license.”
The only thing unique about this cross licensing offer is that Tesla publicly announced it as an open offer for anyone willing to accept it. This is not a patent “free for all,” and it certainly is not tantamount to Tesla “taking down the patent wall.” These are catchy sound bites, but they in fact obfuscate the clear business-minded nature of this commercial decision.
For anyone perhaps still doubting what is happening here, the same L.A. Times story further confirms that Tesla is not abandoning the patent system. As stated to the reporter: “Tesla will continue to seek patents for its new technology to prevent others from poaching its advancements.” So much for the much ballyhooed pronouncements last week of how Tesla’s new patent (licensing) policy “reminds us of the urgent need for patent reform”! Musk clearly believes that the patent system is working just great for the new technological innovation his engineers are creating at Tesla right now.
For those working in the innovation industries, Tesla’s decision to cross license its old patents makes sense. Tesla Motors has already extracted much of the value from these old patents: Musk was able to secure venture capital funding for his startup company and he was able to secure for Tesla a dominant position in the electrical car market through his exclusive use of this patented innovation. (Venture capitalists consistently rely on patents in making investment decisions; anyone who doubts this need only watch a few episodes of Shark Tank.) Now that everyone associates radical, cutting-edge innovation with Tesla, Musk can shift his strategic use of his company’s assets, including his intellectual property rights, such as relying more heavily on the goodwill associated with the Tesla trademark. This is clear, for instance, from the statement to the L.A. Times that companies or individuals agreeing to the “good faith” terms of Tesla’s license agree not to make “knockoffs of Tesla’s cars.”
There are other equally important commercial reasons for Tesla adopting its new cross-licensing policy, but the point has been made. Tesla’s new cross-licensing policy for its old patents is not Musk embracing “the open source philosophy” (as he asserts in his announcement). This may make good PR given the overheated rhetoric today about the so-called “broken patent system,” but it’s time people recognize the difference between PR and a reasonable business decision that reflects a company that has used (old) patents to acquire a dominant market position and is now changing its business model given these successful developments.
At a minimum, people should recognize that Tesla is not declaring that it will not bring patent infringement lawsuits, but only that it will not sue people with whom it has licensed its patented innovation. This is not, contrary to one law professor’s statement, a company “refrain[ing] from exercising their patent rights to the fullest extent of the law.” In licensing its patented technology, Tesla is in fact exercising its patent rights to the fullest extent of the law, and that is exactly what the patent system promotes in the myriad business models and innovative commercial arrangements it makes possible.
The Federalist Society has started a new program, The Executive Branch Review, which focuses on the myriad fields in which the Executive Branch acts outside of the constitutional and legal limits imposed on it, either by Executive Orders or by the plethora of semi-independent administrative agencies’ regulatory actions.
I recently posted on the Federal Trade Commission’s (FTC) ongoing investigations into the patent licensing business model and the actions (“consent decrees”) taken by the FTC against Bosch and Google. These “consent decrees” constrain Bosch’s and Google’s rights to enforce patents they have committed to standard setting organizations (these patents are called “standard essential patents”). Here’s a brief taste:
One of the most prominent participants at the FTC-DOJ workshop back in December, former DOJ antitrust official and UC-Berkeley economics professor Carl Shapiro, explained in his opening speech that there was still insufficient data on patent licensing companies and their effects on the market. This is true; for instance, a prominent study cited by Google et al. in support of their request to the FTC to investigate patent licensing companies has been described as being fundamentally flawed on both substantive and methodological grounds. Even more important, Professor Shapiro expressed skepticism at the workshop that, even if valid data were properly acquired, the FTC would have the legal authority to sanction patent licensing firms for being allegedly anti-competitive.
Commentators have long noted that courts and agencies have a lousy historical track record when it comes to assessing the merits of new innovation, whether in new products or new business models. They maintain that the FTC should not continue such mistakes by letting its decision-making today be driven by rhetoric or by the widespread animus against certain commercial firms. Restraint and fact-gathering, institutional virtues reflected in a government animated by the rule of law and respect for individual rights, are key to preventing regulatory overreach and harm to future innovation.
Go read the whole thing, and, while you’re at it, check out Commissioner Joshua Wright’s similar comments on the FTC’s investigations of patent licensing companies, which the FTC calls “patent assertion entities.”
In A Line in the Sand on the Calls for New Patent Legislation, Mr. Sobon responds to the heavy-handed rhetoric and emotionalism that dominates the debate today over patent licensing and litigation. He calls for a return to the real first principles of the patent system in discussions about patent licensing, as well as for more measured thinking and analysis about the costs of uncertainty created by never-ending systemic changes from legislation produced by heavy lobbying by interested parties. Here’s a small taste:
One genius of our patent system has been an implicit recognition that since its underlying subject matter, innovation, remains by definition in constant flux, the scaffolding of our system and the ability of all stakeholders to make reasonably consistent, prudent and socially efficient choices, should remain as stable as possible. But now these latest moves, demanding yet further significant changes to our patent laws, threaten that stability. And it is in fact systemic instability, from whatever source, that allows the very parasitic behaviors we have termed “troll”-like, to flourish.
It is silly and blindly ahistoric to lump anyone who seeks to license or enforce a patent right, but who does not themselves make a corresponding product, as a “troll.”
Read the whole thing here. Mr. Sobon’s essay reflects similar concerns expressed by Commissioner Joshua Wright this past April on the Federal Trade Commission’s investigation of what the FTC identifies as “patent assertion entities.”
Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet from President Obama’s comments on February 14 at a Google-sponsored Internet Q&A on Google+ that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, Reps. DeFazio and Chaffetz introduced on March 1 the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner—patent licensing companies that purchase and license patents in the marketplace (and who sue infringers when infringers refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls” who are destroyers of all things good—and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).
As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition. The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM. How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker? The same is true of the new term, “Patent Assertion Entities,” which sounds more neutral, but has the same problem in that it also lacks any objective definition or usage.
Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out some of the serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners. Moreover, as Professor Risch ably identifies, this legislation was so cleverly drafted to cover only a limited set of a specific type of patent-owner that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies—the owner of the podcast patent. (Although you wouldn’t know this if you read the supporters of the SHIELD Act like the EFF who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)
There are many things wrong with the SHIELD Act, but one thing that I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.” This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1. This claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since this study, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non Practicing Entity” or “NPE.”) A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.
The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law. In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.
In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure. This is not merely an academic exercise. Since Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, it’s important to know if it is true, because it’s being used to drive proposed legislation in the real world. If patent legislation is supposed to secure innovation, then it behooves us to know if this legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim is based on a study that is fundamentally flawed in both substance and methodology.
In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll” that covers almost every person, corporation or university that sues someone for infringing a patent that is not, at that moment, being used to manufacture a product. While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar who is using it, one would be extremely hard pressed to find anyone embracing this expansive usage in patent scholarship or similar commentary today.
There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition, by necessity, includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods. Second, it includes every individual and start-up company who plans to manufacture a patented invention, but is forced to sue an infringer-competitor who thwarted these business plans by its infringing sales in the marketplace. Third, it includes commercial firms throughout the wide-ranging innovation industries—from high tech to biotech to traditional manufacturing—that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will necessarily become one in the future given inevitable changes in one’s business plans or commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of their patented invention is to license the technology or to litigate against the infringers who refuse license offers.
So, when almost every possible patent-owning person, university, or corporation is defined as a “NPE” or “patent troll,” why are we surprised that a study that employs this virtually boundless definition concludes that they create $29 billion in litigation costs per year? The only thing surprising is that the number isn’t even higher!
There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion. What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations? We are not told. What are the historical costs of patent litigation? We are not told. On what basis then can we conclude that $29 billion is “too high” or even “too low”? We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.
The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study. Every lawsuit, whether a contract, tort, property, regulatory or constitutional dispute is, according to the assumption of the $29 billion cost study, a deadweight loss. The entire U.S. court system is an inefficient cost imposed on everyone who uses it. Really? That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!
In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study. All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers. For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.” So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.
As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law. Such a problem even has a formal name in economic studies: self-selection bias. But one doesn’t need to be an economist or statistician to be able to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic adage goes, “Something is rotten in the state of Denmark.”
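For readers who want to see the mechanics of self-selection bias, here is a minimal, purely illustrative simulation. None of the numbers below come from the RPX survey or the $29 billion study; the population, cost distribution, and response rule are all hypothetical assumptions chosen only to show how a survey that over-samples high-cost firms inflates the resulting average:

```python
import random

# Hypothetical illustration of self-selection bias (all numbers invented).
random.seed(42)

# A made-up population of 1,000 firms whose annual litigation costs (in $M)
# follow a skewed distribution: most firms face modest costs, a few face large ones.
population = [random.expovariate(1 / 2.0) for _ in range(1000)]  # mean ~ $2M

true_mean = sum(population) / len(population)

# Assume firms with higher costs are more likely to answer a survey run by
# a party with a stake in patent reform: response probability rises with cost.
respondents = [cost for cost in population if random.random() < min(1.0, cost / 5.0)]

survey_mean = sum(respondents) / len(respondents)

print(f"True average cost:     ${true_mean:.2f}M")
print(f"Surveyed average cost: ${survey_mean:.2f}M")  # biased upward
```

Under these assumptions the surveyed average overstates the true average, even though every individual response is truthful: the bias comes entirely from who chooses to respond, which is precisely the concern raised about the RPX data.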
Even worse, as I noted above, the RPX survey was confidential. RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study. Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good. Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”
In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties. No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate. Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will. If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation? If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?
And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, the more, in the words of Professor Risch, one is left “scratching one’s head” in bewilderment. The more one looks at the supporting studies and arguments in favor of the SHIELD Act, the more one is left, in the words of Professor Risch, “scratching one’s head.” The more and more one thinks about the SHIELD Act, the more one realizes what it is—legislation that has been crafted at the behest of the politically powerful (such as an Internet company who can get the President to do a special appearance on its own social media website) to have the government eliminate a smaller, publicly reviled, and less politically-connected group.
In short, people may have legitimate complaints about the ways in which the court system in the U.S. generally has problems. Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants to make the litigation system work better or more efficiently (by some established metric). Professor Risch has done exactly this in a recent Wired op-ed. But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.