Archives For intellectual property

Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices.  The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation.  As Committee Chairman Lindsey Graham explained:

One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.

Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.  

1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or have abused the REMS (Risk Evaluation and Mitigation Strategy) program, refusing samples to generics under the guise of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.

2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although the FDA has indicated that citizen petitions rarely delay the approval of generic drugs, a few drug makers, such as Shire ViroPharma, have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers, which often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.

3. Curtail Anticompetitive Pay-for-Delay Settlements. The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. As with most litigation, many of these patent challenges end in settlement rather than trial. The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company’s agreement to forestall the launch of its lower-priced drug. Settlements that involve a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead. In response, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements would prevent delays to generic entry, making cheaper drugs available to patients as quickly as possible.

However, the Act’s rigid presumption that any exchange of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers. Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and avoid litigation expenses. Moreover, many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than it would have following prolonged litigation between the generic and brand company. A rigid presumption of anticompetitive behavior will deter these settlements, increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Indeed, the U.S. Supreme Court has declined to treat these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value. Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.

4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers.  I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation.  Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process.  And because IPR proceedings do not have a standing requirement, the process has been exploited  by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices. 

The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.  By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers.  This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.

5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.” At its worst, product hopping involves a brand drug maker making minor changes to a drug nearing the end of its patent term so that it gets a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.

However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the sale of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Yet to acquire a patent and FDA approval, the improved version of the drug must already be sufficiently different from, and innovative over, the original drug; even so, the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act defines as an anticompetitive patent thicket any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation built on previous inventions. Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.

Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don’t restrict patent rights so severely that they deter innovation; if innovation declines, a significant body of research predicts that patients’ health outcomes will suffer as a result.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The first post, by Luke Froeb, Michael Doane & Mikhael Shor, is here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
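For readers less familiar with the framework, here is a textbook sketch of the symmetric Nash bargaining solution (the notation is ours and is not drawn from any particular expert report):

```latex
% Two parties bargain over a total surplus S with outside options d_A and d_B.
\[
u_A^{*} = d_A + \tfrac{1}{2}\bigl(S - d_A - d_B\bigr), \qquad
u_B^{*} = d_B + \tfrac{1}{2}\bigl(S - d_A - d_B\bigr)
\]
% Each party keeps its outside option and splits the remaining gains from trade,
% so anything that raises d_A (A's payoff if talks fail) shifts surplus toward A.
```

In a foreclosure theory of harm, the claim is that the merger or conduct improves one party’s disagreement payoff, and the model translates that improved threat point into higher negotiated prices.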

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
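To see how the three inputs Judge Leon scrutinized fit together, here is a purely illustrative back-of-the-envelope sketch in Python. The function and all of the numbers are hypothetical; Dr. Shapiro’s actual model was far more elaborate:

```python
# Hypothetical sketch of how the three inputs interact; figures are invented.

def blackout_gain(rival_subs_lost, share_switching_to_att, att_margin_per_sub):
    """Profit AT&T would pick up if a rival distributor lost Turner content."""
    return rival_subs_lost * share_switching_to_att * att_margin_per_sub

gain = blackout_gain(
    rival_subs_lost=1_000_000,    # input (1): subscribers the rival would lose
    share_switching_to_att=0.35,  # input (2): share of those who switch to AT&T
    att_margin_per_sub=600.0,     # input (3): AT&T's margin per gained subscriber
)
print(f"Illustrative gain bolstering the post-merger threat point: ${gain:,.0f}")
```

On the model’s logic, a larger gain of this sort improves the merged firm’s disagreement payoff in affiliate-fee negotiations, which is why discrediting each input undermines the predicted fee increase.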

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
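In rough notation (ours, not the FTC’s or Dr. Shapiro’s), the comparison described above can be written as follows, where V is the value an OEM derives from a handset, p_Q and p_R are the Qualcomm and rival chip prices, r is a FRAND royalty, and s is the alleged surcharge:

```latex
\[
\text{Gains}_{\text{Qualcomm chip}} = V - p_Q - r, \qquad
\text{Gains}_{\text{rival chip}} = V - p_R - (r + s)
\]
% On the FTC's theory, s > 0 shrinks the surplus available when an OEM buys a
% rival's chip, squeezing OEM margins and the returns rival chipmakers can earn.
```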

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases, and he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder: as complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

On March 14, the Federal Circuit will hear oral arguments in BTG International v. Amneal Pharmaceuticals, a case that could dramatically influence the future of duplicative patent litigation in the pharmaceutical industry. The court will determine whether the America Invents Act (AIA) bars patent challengers that succeed in invalidating patents in inter partes review (IPR) proceedings from repeating their winning arguments in district court. Courts and litigants had previously assumed that the AIA’s estoppel provision only prevented unsuccessful challengers from reusing failed arguments. However, in an amicus brief filed in the case last month, the U.S. Patent and Trademark Office (USPTO) argued that, although it seems counterintuitive, under the AIA even parties that succeed in getting patents invalidated in IPR cannot reuse their arguments.

If the Federal Circuit agrees with the USPTO, patent challengers could be strongly deterred from bringing IPR proceedings because it would mean they couldn’t reuse any arguments in district court.  This deterrent effect would be especially strong for generic drug makers, who must prevail in district court in order to get approval for their Abbreviated New Drug Application from the FDA. 

Critics of the USPTO’s position assert that it will frustrate the AIA’s purpose of facilitating generic competition.  However, if the Federal Circuit adopts the position, it would also reduce the amount of duplicative litigation that plagues the pharmaceutical industry and threatens new drug innovation.  According to a 2017 analysis of over 6,500 IPR challenges filed between 2012 and 2017, approximately 80% of IPR challenges were filed during an ongoing district court case challenging the patent.   This duplicative litigation can increase costs for both challengers and patent holders; the median cost for an IPR proceeding that results in a final decision is $500,000 and the median cost for just filing an IPR petition is $100,000.  Moreover, because of duplicative litigation, pharmaceutical patent holders face persistent uncertainty about the validity of their patents. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

In addition, deterring IPR challenges would help to reestablish balance between drug patent owners and patent challengers.  As I’ve previously discussed here and here, the pro-challenger bias in IPR proceedings has led to significant deviation in patent invalidation rates under the two pathways; compared to district court challenges, patents are twice as likely to be found invalid in IPR challenges. The challenger is more likely to prevail in IPR proceedings because the Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts. Furthermore, if the challenger prevails in the IPR proceedings, the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision in favor of the patent holder.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision. 

However, the USPTO acknowledges that its position is counterintuitive because it means that a court could not consider invalidity arguments that the PTAB found persuasive. It is unclear whether the Federal Circuit will refuse to adopt this counterintuitive position or whether Congress will amend the AIA to limit estoppel to failed invalidity claims. As a result, a better and more permanent way to eliminate duplicative litigation would be for Congress to enact the Hatch-Waxman Integrity Act of 2019 (HWIA). The HWIA was introduced by Senator Thom Tillis in the Senate and Congressman Bill Flores in the House, and was proposed in the last Congress by Senator Orrin Hatch. The HWIA eliminates the ability of drug patent challengers to file duplicative claims in both federal court and IPR proceedings. Instead, they must choose between district court litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR proceedings (which are faster and provide certain pro-challenger provisions).

Thus, the HWIA would reduce duplicative litigation that increases costs and uncertainty for drug patent owners.   This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold-out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent holdup—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to the maker of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much as retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but lacked the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of every component of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were and often continue to be much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from features other than communication, many features on a smartphone derive much of their value from the communication powers of the phone.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primarily to the commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
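A small numeric sketch (hypothetical figures of our own) makes the dilemma in the quoted example concrete:

```python
# Invented numbers illustrating why a single component-based royalty cannot
# track the very different downstream values the same patent creates.

component_price = 100.0      # same flight-stabilization component in both markets
value_commercial = 10_000.0  # assumed value the patent adds per commercial aircraft
value_toy = 2.0              # assumed value the patent adds per toy airplane

single_rate = 0.05           # one royalty rate applied to the component price
royalty = single_rate * component_price  # $5.00 per unit in both markets

print(f"Uncaptured value per commercial aircraft: ${value_commercial - royalty:,.2f}")
print(f"Royalty exceeds per-unit value in the toy market by: ${royalty - value_toy:.2f}")
```

Either the single rate is set low enough for the toy market and the innovator forgoes most of the value it creates for commercial aircraft, or it is set high and the toy market is priced out; a royalty tied to the value of the end product avoids forcing that choice.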

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by itself, has clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty for that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.
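A quick compounding calculation (with illustrative numbers of our own) shows why a few tenths of a percentage point of annual growth eventually dwarf one-time static gains:

```python
# Compare 50 years of growth at a baseline rate with growth at a rate just
# 0.2 percentage points higher; the gap compounds into a large level difference.

base_growth, boosted_growth, years = 0.020, 0.022, 50

level_base = (1 + base_growth) ** years
level_boosted = (1 + boosted_growth) ** years

print(f"Output after {years} years is "
      f"{100 * (level_boosted / level_base - 1):.1f}% higher "
      "with 0.2 points of extra annual growth.")
```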

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

Last week, Senator Orrin Hatch, Senator Thom Tillis, and Representative Bill Flores introduced the Hatch-Waxman Integrity Act of 2018 (HWIA) in both the Senate and the House of Representatives.  If enacted, the HWIA would help to ensure that the unbalanced inter partes review (IPR) process does not stifle innovation in the drug industry and jeopardize patients’ access to life-improving drugs.

Created under the America Invents Act of 2011, IPR is a new administrative pathway for challenging patents. It was, in large part, created to fix the problem of patent trolls in the IT industry; the trolls allegedly used questionable or “low quality” patents to extort profits from innovating companies. IPR created an expedited pathway to challenge patents of dubious quality, thus making it easier for IT companies to invalidate low quality patents.

However, IPR is available for patents in any industry, not just the IT industry. In the market for drugs, IPR offers an alternative to the litigation pathway that Congress created over three decades ago in the Hatch-Waxman Act. Although IPR seemingly fixed a problem that threatened innovation in the IT industry, it created a new problem that directly threatened innovation in the drug industry. I’ve previously published an article explaining why IPR jeopardizes drug innovation and consumers’ access to life-improving drugs. With Hatch-Waxman, Congress sought to achieve a delicate balance between stimulating innovation from brand drug companies, who hold patents, and facilitating market entry from generic drug companies, who challenge the patents. However, IPR disrupts this balance: critical differences between IPR proceedings and Hatch-Waxman litigation clearly tilt the scales in the patent challengers’ favor. In fact, IPR has produced noticeably anti-patent results; patents are twice as likely to be found invalid in IPR challenges as they are in Hatch-Waxman litigation.

The Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts in Hatch-Waxman proceedings. In federal court, patents are presumed valid and challengers must prove each patent claim invalid by “clear and convincing evidence.” In IPR proceedings, no such presumption of validity applies and challengers must only prove patent claims invalid by the “preponderance of the evidence.”

Moreover, whereas patent challengers in district court must establish sufficient Article III standing, IPR proceedings do not have a standing requirement.  This has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific pre-filing settlement demands.  The lack of a standing requirement has also led to the  exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

Finally, patent owners are often forced into duplicative litigation in both IPR proceedings and federal court litigation, leading to persistent uncertainty about the validity of their patents.  Many patent challengers that are unsuccessful in invalidating a patent in district court may pursue subsequent IPR proceedings challenging the same patent, essentially giving patent challengers “two bites at the apple.”  And if the challenger prevails in the IPR proceedings (which is easier to do given the lower standard of proof), the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

The pro-challenger bias in IPR creates significant uncertainty for patent rights in the drug industry.  As an example, just last week patent claims for drugs generating $6.5 billion for drug company Sanofi were invalidated in an IPR proceeding.  Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And, if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

The HWIA, which applies only to the drug industry, is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It eliminates challengers’ ability to file duplicative claims in both federal court and through the IPR process. Instead, they must choose between Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR (which is faster and provides certain pro-challenger provisions). In addition to eliminating generic challengers’ “second bite of the apple,” the HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.

Thus, if enacted, the HWIA would create incentives that reestablish Hatch-Waxman litigation as the standard pathway for generic challenges to brand patents.  Yet, it would preserve IPR proceedings as an option when speed of resolution is a primary concern.  Ultimately, it will restore balance to the drug industry to safeguard competition, innovation, and patients’ access to life-improving drugs.

An important but unheralded announcement was made on October 10, 2018: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) released a draft CEN-CENELEC Workshop Agreement (CWA) on the licensing of Standard Essential Patents (SEPs) for 5G/Internet of Things (IoT) applications. The final agreement, due to be published in early 2019, is likely to have significant implications for the development and roll-out of both 5G and IoT applications.

CEN and CENELEC, which along with the European Telecommunications Standards Institute (ETSI) are the officially recognized standard setting bodies in Europe, are private international non-profit organizations with a widespread network consisting of technical experts from industry, public administrations, associations, academia and societal organizations. This first Workshop brought together representatives of the 5G/Internet of Things (IoT) technology user and provider communities to discuss licensing best practices and recommendations for a code of conduct for licensing of SEPs. The aim was to produce a CWA that reflects and balances the needs of both communities.

The final consensus outcome of the Workshop will be published as a CEN-CENELEC Workshop Agreement (CWA). The draft, which is available for public comments, comprises principles and guidelines that prepare a foundation for future licensing of standard essential patents for fifth generation (5G) technologies. The draft also contains a Q&A section to aid new implementers and patent holders.

The IoT ecosystem is likely to have over 20 billion interconnected devices by 2020 and represent a market of $17 trillion (about the same as the current GDP of the U.S.). The data collected by one device, such as a smart thermostat that learns what time the consumer is likely to be at home, can be used to increase the performance of another connected device, such as a smart fridge. Cellular technologies are a core component of the IoT ecosystem, alongside applications, devices, software etc., as they provide connectivity within the IoT system. 5G technology, in particular, is expected to play a key role in complex IoT deployments, which will extend the use of cellular networks beyond smartphones to smart home appliances, autonomous vehicles, health care facilities etc. in what has been aptly described as the fourth industrial revolution.

Indeed, the role of 5G to IoT is so significant that the proposed $117 billion takeover bid for U.S. tech giant Qualcomm by Singapore-based Broadcom was blocked by President Trump, citing national security concerns. (A letter sent by the Committee on Foreign Investment in the US suggested that Broadcom might starve Qualcomm of investment, preventing it from competing effectively against foreign competitors–implicitly those in China.)

While commercial roll-out of 5G technology has not yet fully begun, several efforts are being made by innovator companies, standard setting bodies and governments to maximize the benefits from such deployment.

The draft CWA Guidelines (hereinafter “the guidelines”) are consistent with some of the recent jurisprudence on SEPs on various issues. While they offer relatively little guidance specific to 5G SEPs, they clarify several aspects of SEP licensing that will be useful, particularly the negotiating process and the conduct of both parties.

The guidelines contain 6 principles followed by some questions pertaining to SEP licensing. The principles deal with:

  1. The obligation of SEP holders to license the SEPs on Fair, Reasonable and Non-Discriminatory (FRAND) terms;
  2. The obligation on both parties to conduct negotiations in good faith;
  3. The obligation of both parties to provide necessary information (subject to confidentiality) to facilitate timely conclusion of the licensing negotiation;
  4. Compensation that is “fair and reasonable” and achieves the right balance between incentives to contribute technology and the cost of accessing that technology;
  5. A non-discrimination obligation on the SEP holder toward similarly situated licensees, even though their terms need not be identical; and
  6. Recourse to a third party FRAND determination either by court or arbitration if the negotiations fail to conclude in a timely manner.

There are 22 questions and answers, as well, which define basic terms and touch on issues such as: what counts as good faith conduct by negotiating parties, global portfolio licensing, FRAND royalty rates, patent pooling, dispute resolution, injunctions, and other issues relevant to FRAND licensing policy in general.

Below are some significant contributions that the draft report makes on issues such as the supply chain level at which licensing is best done, treatment of small and medium enterprises (SMEs), non disclosure agreements, good faith negotiations and alternative dispute resolution.

Typically, many technologies will be adopted in the IoT ecosystem, several of which will be standardized. The guidelines offer help to product and service developers in this regard and suggest that one may need to obtain licenses from SEP owners for products or services incorporating communications technology like 3G UMTS, 4G LTE, Wi-Fi, NB-IoT, Cat-M or video codecs such as H.264. The guidelines, however, clarify that with the deployment of IoT, licenses for several other standards may be needed and developers should be mindful of these complexities when starting out in order to avoid potential infringements.

Notably, the guidelines suggest that in order to simplify licensing, reduce costs for all parties and maintain a level playing field between licensees, SEP holders should license at one level. While this may vary between different industries, for communications technology the licensing point is often at the end-user equipment level. There has been a fair bit of debate on this issue, and the recent order by Judge Koh granting the FTC’s motion for partial summary judgment deals with some of it.

In the judgment delivered on November 6, Judge Koh relied primarily on the Ninth Circuit decisions in Microsoft v. Motorola (2012 and 2015) to rule on the core issue of the scope of the FRAND commitments, specifically whether licensing extends to all levels or is confined to the end-device level. The court interpreted the pro-competitive principles behind the non-discrimination requirement to mean that such commitments are “sweeping” and, essentially, that an SEP holder has to license to anyone willing to offer a FRAND rate globally. It also cited Ericsson v. D-Link, where the Federal Circuit held that “compliant devices necessarily infringe certain claims in patents that cover technology incorporated into the standard and so practice of the standard is impossible without licenses to all incorporated SEP technology.”

The guidelines speak about the importance of non-disclosure agreements (NDAs) in such licensing negotiations, given that some of the information exchanged between parties during negotiation, such as claim charts, may be sensitive and confidential. Therefore, an undue delay in agreeing to an NDA, without well-founded reasons, might be taken as evidence of a lack of good faith in negotiations and could render such a licensee unwilling.

They also provide quite a boost for small and medium enterprises (SMEs) in licensing negotiations by addressing the duty of SEP owners to be mindful of SMEs that may be less experienced and therefore lack information from which to draw assurance that proposed terms are FRAND. The guidelines provide that SEP owners should provide whatever information they can under NDA to help the negotiation process. Equally, the same obligation applies to a more experienced licensee dealing with an SEP owner that is an SME.

The guidelines also offer some clarity on time frames for negotiations, prescribing a maximum time that parties should take to respond to offers and counter-offers, which can extend up to several months in complex cases involving hundreds of patents. They also prescribe how potential licensees should conduct themselves on receiving an offer and how to make counter-offers in a timely manner.

Furthermore, the guidelines lay down the various ways in which royalty rates may be structured and clarify that there is no one fixed way in which this may be done. Similarly, they offer myriad ways in which potential licensees may be able to determine for themselves if the rates offered to them are fair and reasonable, such as third party patent landscape reports, public announcements, expert advice etc.

Finally, in the event that a negotiation reaches an impasse, the guidelines endorse alternative dispute resolution mechanisms such as mediation or arbitration for the parties to resolve the issue. Bodies such as the International Chamber of Commerce and the World Intellectual Property Organization may provide useful platforms in this regard.

Almost 20 years have passed since technology pioneer Kevin Ashton first coined the phrase Internet of Things. While companies are gearing up to participate in the IoT market, regulation and policy in the IoT world remain far from a predictable framework to follow. There is a lot of guesswork about how rules and standards are likely to shape up, with little or no guidance for companies on how to prepare themselves for what faces them very soon. Concrete efforts such as these are therefore welcome. The draft guidelines do attempt to offer some much needed clarity and are now open for public comments due by December 13. It will be good to see what the final CWA report on licensing of SEPs for 5G and IoT looks like.

 

Last week, the UK Court of Appeal upheld the findings of the High Court in an important case regarding standard essential patents (SEPs). Of particular significance, the Court of Appeal upheld the finding that the defendant, an implementer of SEPs, could have the sale of its products enjoined in the UK unless it enters into a global licensing deal on terms deemed by the court to be fair, reasonable and non-discriminatory (FRAND). The case is noteworthy not least because the threat of an injunction of this sort has become increasingly rare in other jurisdictions, arguably resulting in an imbalance in bargaining power between patent holders and implementers.

The case concerned patents held by Unwired Planet (most of which had been purchased from Ericsson) that it had declared to be essential to the operation of various telecommunications standards. Chinese telecom giant Huawei had incorporated these patented technologies in its products but disputed the legitimacy of Unwired Planet’s (UP) patents and refused to license them on the terms that were offered.

By way of background, in March 2014, after finding it hard to secure licenses, UP sued Huawei, Samsung and Google and sought an injunction. After the commencement of proceedings, UP made licence offers to the defendants, in April and July 2014 and again during the proceedings, including a worldwide SEP portfolio licence, a UK SEP portfolio licence and per-patent licences for any of the SEPs in suit. The defendants argued that the offers were not FRAND. Huawei and Samsung also contended that the offers were in breach of European competition law. UP settled with Google. Three technical trials of the patents began, and UP was able to show that at least two of the patents sued upon were valid and essential and had been infringed. Subsequently, Samsung secured a settlement (at a rate below the market rate) and the FRAND trial went ahead with just Huawei.

Judge Birss delivered the High Court order on April 5, 2017. He held that UP’s patents were valid and infringed and that UP had not abused its dominant position by seeking an injunction. He ordered a FRAND injunction against the two infringed patents, stayed pending appeal. The injunction was subject to a number of conditions that applied because the case dealt with patents subject to a FRAND undertaking, and it would cease to have effect if Huawei entered into the FRAND license determined by the Court. He also observed that the parties could return for a further determination when such a license expires. Furthermore, he held that there was one set of FRAND terms and that the scope of the FRAND license was worldwide.

The UK Court of Appeal (the bench consisting of Lord Justice Kitchin, Lord Justice Floyd and Lady Justice Asplin), in handing down a 291-paragraph, 66-page judgment dealing with Huawei’s appeal, upheld Birss’ findings. Huawei’s appeal centered on the global nature of the FRAND license and the non-discrimination undertaking in UP’s FRAND commitments. Some significant findings of the Court of Appeal are briefly set out below.

In upholding Birss’ decision, the Court of Appeal noted that it was unfair to say that UP was using the threat of an injunction to leverage Huawei into taking a global license, and that Huawei had the option to take the global license or submit to an injunction in the UK. Drawing attention to the potential complexities in a FRAND negotiation, the Court observed:

…The owner of a SEP may still use the threat of an injunction to try to secure the payment of excessive licence fees and so engage in hold-up activities. Conversely, the infringer may refuse to engage constructively or behave unreasonably in the negotiation process and so avoid paying the licence fees to which the SEP owner is properly entitled, a process known as “hold-out”.

Huawei further argued that the imposition of a global license on terms set by a national court, based on a national finding of infringement, was wrong in principle. It also pointed out that patent litigation is ongoing in both Germany and China and that there are some countries where UP holds “no relevant” patents at all.

In response to these contentions, the Court of Appeal held that it may be highly impractical for a SEP owner to seek to negotiate a license of its patent rights in each country, and it rejected Huawei's submission that the approach adopted by Birss in these proceedings was out of line with the territorial nature of patent litigation. It clarified that Birss did not adjudicate on issues of infringement or validity concerning foreign SEPs and did not usurp the rights of foreign courts. It further observed that Birss' approach is consistent with the European Commission's communication to the European Parliament, the Council and the European Economic and Social Committee dated 29 November 2017 (COM (2017) 712 final) (“the November 2017 EU Communication”), which notes in section 2.4:

For products with a global circulation, SEP licences granted on a worldwide basis may contribute to a more efficient approach and therefore can be compatible with FRAND.

The Court of Appeal, however, disagreed with Birss' conclusion that there was only one set of FRAND terms. This aspect of the judgment comes as a relief, since it more appropriately reflects the practical realities of a FRAND negotiation. The Court held:

Patent licences are complex and, having regard to the commercial priorities of the participating undertakings and the experience and preferences of the individuals involved, may be structured in different ways in terms of, for example, the particular contracting parties, the rights to be included in the licence, the geographical scope of the licence, the products to be licensed, royalty rates and how they are to be assessed, and payment terms. Further, concepts such as fairness and reasonableness do not sit easily with such a rigid approach.

Similarly, on the non-discrimination prong of FRAND, the Court of Appeal agreed with Birss that it is not “hard-edged”: the test is whether a difference in rates distorts competition between licensees. It also noted that the “hard-edged” interpretation would be “akin to the re-insertion of a ‘most favoured licensee’ clause in the FRAND undertaking,” which does not seem to be what the standards body, the European Telecommunications Standards Institute (ETSI), had in mind when it formulated its policies. The Court also held:

We consider that a non-discrimination rule has the potential to harm the technological development of standards if it has the effect of compelling the SEP owner to accept a level of compensation for the use of its invention which does not reflect the value of the licensed technology.

Finally, the Court of Appeal held that UP did not abuse its dominant position merely because it failed to strictly comply with the safe harbor framework laid down by the Court of Justice of the European Union in Huawei v. ZTE. The only requirement that must be satisfied before the SEP holder commences proceedings is that it give sufficient notice to, or consult with, the implementer.

The Court of Appeal's decision offers significant guidance for the emerging policy debate on FRAND. As mentioned at the beginning of this post, the decision is particularly noteworthy because Unwired Planet is one of only two cases in the last two years in which injunctive relief has been granted involving standard essential patents; such relief has rarely been granted in recent years. The second instance is Huawei v. Samsung, in which the Shenzhen Court in China held earlier this year that Huawei had met its FRAND obligation while Samsung had not (negotiations had dragged on for six years). An injunction was granted against Samsung for infringing two of Huawei's Chinese patents, which are counterparts of two U.S. asserted patents (although Judge Orrick of the U.S. District Court for the Northern District of California enjoined Huawei from enforcing the injunction).

The current jurisprudence on injunctive relief with respect to FRAND-encumbered SEPs is that there is no per se ban on such relief; courts have, however, been very reluctant to actually grant it. While injunctions are statutory remedies that in most cases follow once a patent is found to be infringed, administrative agencies and courts have taken the position that FRAND commitments limit that premise.

Following the eBay decision in the U.S., defendants in infringement claims involving SEPs have argued that permanent injunctions should not be available for FRAND-encumbered SEPs, and those arguments prevailed in cases such as Apple v. Motorola in 2014 (where Judge Randall Rader, in his dissent, also made a sound case that there was evidence of hold-out by Apple). However, in the institutional bargaining framework of FRAND, which is based on a mutuality of considerations, such a recourse is misplaced and likely to disturb this balance. The current narrative on FRAND that dominates policymaking and jurisprudence is incomplete in its unilateral focus on avoiding the possible problem of patent hold-up in the absence of concrete evidence indicating its probability. In Ericsson v. D-Link, the US Court of Appeals for the Federal Circuit underscored this point when it observed that “if an accused infringer wants an instruction on patent hold-up and royalty stacking [to be given to the jury], it must provide evidence on the record of patent hold-up and royalty stacking.”

Remedies that emanate from a one-sided perspective tilt the bargaining dynamic in favour of implementers: if the worst penalty a SEP infringer has to pay is the FRAND royalty it would have paid anyway, then hold-out (or reverse hold-up) by implementers becomes a very profitable strategy. Remedies for patent infringement cannot be ignored, because they are core to the framework for licensing negotiations and to ensuring compliance by licensees. A disproportionate reliance on liability rules over property rights is likely to exacerbate the countervailing problem of hold-out and detrimentally impact incentives to innovate, ultimately undermining the welfare goals that such enforcement seeks to achieve.

The Court of Appeal therefore gave valuable guidance when it noted:

Just as implementers need protection, so too do the SEP owners. They are entitled to an appropriate reward for carrying out their research and development activities and for engaging with the standardization process, and they must be able to prevent technology users from free-riding on their innovations. It is therefore important that implementers engage constructively in any FRAND negotiation and, where necessary, agree to submit to the outcome of an appropriate FRAND determination.

Hopefully this decision brings with it some balance to FRAND negotiations, as well as a shift in how courts adjudicate these disputes. It underscores an oft-forgotten principle that is core to the FRAND framework: that FRAND is a two-way street, as was observed in the celebrated Huawei v. ZTE case in 2015.

On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay of a preliminary ruling in their ongoing dispute over the terms of Qualcomm's licensing agreements, indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable and non-discriminatory”) commitments made in the context of standard-setting bodies, and whether those obligations extend to component-level licensing in the absence of an express agreement to that effect.

At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.

The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018, about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC's major claims against Qualcomm were as follows:

  • Qualcomm has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices paid by end consumers for cellphones and tablets.
  • Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
  • The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
  • Given that Qualcomm has made a commitment to standard setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.

The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:

[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.

Qualcomm filed a motion to dismiss on April 3, 2017, which was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm's conduct violates §§ 1 and 2 of the Sherman Act and that Qualcomm had entered into exclusive dealing arrangements with Apple. Thus, the court concluded, the FTC had adequately stated a claim under § 5 of the FTC Act.

It is important to note that the core of the FTC's argument regarding Qualcomm's abuse of its dominant position rests on Qualcomm's adoption of the “no license, no chips” policy and the resulting alleged breach of its FRAND obligations. However, the FTC falls short of showing how the royalties Qualcomm charges OEMs exceed FRAND rates so as actually to amount to a breach, or how they qualify as what the FTC calls a “tax” under the price-squeeze theory it puts forth.

(The court did not address whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though their scope remains quite amorphous.)

On August 30, the FTC filed a motion for partial summary judgment on claims governed by California contract law. This would leave the antitrust issues to be decided at the subsequent hearing, which is set for January next year.

In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S.-based standards development organisations (SDOs):

  1. The Telecommunications Industry Association (TIA) and
  2. The Alliance for Telecommunications Industry Solutions (ATIS).

These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.

The FTC asserts that this obligation should be interpreted to extend to Qualcomm's rival modem chip manufacturers and sellers. It therefore requests that the Court grant summary judgment, since it contends there are no disputed facts regarding that obligation. It submits that this should “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments on the requirement to license to competitors, to ETSI, a third SDO.”

A review of the FTC's heavily redacted filing and Qualcomm's subsequent response indicates that questions of fact and law remain as to Qualcomm's licensing commitments and their scope. Thus, contrary to the FTC's assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.

Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.

The IP licensing policies of the two SDOs provide for licensing of relevant patents, on FRAND terms, to all applicants who implement the standards. The key issues, however, are whether components such as modem chips can be said to implement the standards and whether component-level licensing falls within the ambit of those policies; the answers to both remain unclear.

Qualcomm explains that its commitments to ATIS and TIA do not require it to make licenses available for modem chips, because modem chips do not implement or practice the cellular standards and because the standards do not define the operation of modem chips.

In contrast, the FTC's complaint raises the question of whether FRAND commitments extend to licensing at all levels of the production chain. The different components of a device come together to facilitate the adoption and implementation of a standard. But it does not logically follow that each individual component separately practices or implements that standard merely because it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.

These distinctions matter for interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of that standard. Understanding what counts as a “user” therefore becomes critical, and Qualcomm's submission draws attention to this.

An important factor in determining who is a “user” of a particular standard is the extent to which the standard is practiced or implemented in the product at issue. Some standards development organisations have addressed this in their policies by clarifying that FRAND obligations extend to those “wholly compliant” with, or “fully conforming” to, the specific standards. Clause 6.1 of the ETSI IPR Policy, for example, clarifies that a patent holder's obligation to make licenses available is limited to “methods” and “equipments”. It defines “equipment” as “a system or device fully conforming to a standard” and “methods” as “any method or operation fully conforming to a standard.”

It is noteworthy that the American National Standards Institute's (ANSI) Executive Standards Council Appeals Panel has observed in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”

Device-level licensing is the prevailing industry-wide practice of companies like Ericsson, InterDigital, Nokia and others. In November 2017, the European Commission issued guidelines on the licensing of SEPs and took a balanced approach on this issue by declining to prescribe component-level licensing.

The former Director-General of ETSI, Karl Rosenbrock, takes a contrary view, explaining that ETSI's policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”

Dr. Bertram Huber, a legal expert who personally participated in the drafting of ETSI's IPR Policy, wrote a response to Rosenbrock in which he explains that ETSI's licensing obligations extend only to systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard essential patents practiced in those end-devices.

Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in the 3rd Generation Partnership Project (3GPP), a collaboration that works on the development of cellular technologies. TIA and ATIS are also both accredited by ANSI. The policies that each of these SDOs adopts are therefore likely to influence the others. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.

The non-discrimination obligation, which according to the FTC requires Qualcomm to license its competitors that manufacture and sell chips, would be limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and would depend upon the specific SDO's policy.  As discussed, the policies of ATIS and TIA are unclear on this point.

In conclusion, the FTC's filing does not obviate the need to hear extrinsic evidence on what Qualcomm's commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA as to whether they include component-level licensing, and whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—are being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be abused or misapplied. It is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative for combating piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private-ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over web sites that are “blatantly, overwhelmingly, or structurally engaged in piracy.”  The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block sites in question. The CRTC would retain ultimate authority regarding whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users that upload infringing content. In just one relatively large case, it was estimated that the cost of complying with these requests was 8.25M CAD.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This failure, on the other hand, highlights one of the greatest strengths of the FairPlay proposal: it shifts the focus of enforcement away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. As a result, one of the criticisms of Canada's current “notice and notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is bypassed entirely.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won't be all upside — no policy, private or public, ever is. IP, and property generally, represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too with the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harm may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect, and (2) when harms do occur, is there a sufficiently open and accessible process available through which affected parties (and interested third parties) can criticize and improve the system?

On both counts, the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will be establishing a reasonable and effective remedy to address violations of individuals' property, the ownership of which is broadly considered legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program, which some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, even according to critics of the FairPlay proposal, site blocking produces a measurable reduction in piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.

The proposal stands to reduce the costs of combating piracy as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted that the costs of even run-of-the-mill suits essentially price copyright enforcement out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that can be deployed against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this — but that surely can't be an indictment of lawmaking as a whole. If a form of prohibition is appropriate for category A but inappropriate for category B, then either we assume lawmakers are capable of differentiating between the two categories, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to make that case convincingly (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.

But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.

Unfortunately — and frequently with the best of intentions — efforts to promote one good thing (for instance, more online services) inadequately take account of the balance of the larger legal realities at stake. And one of the most important legal realities, too often thrown aside in the rush to protect the Internet, is the requirement that policy be established through public, (relatively) democratically accountable channels.

Trade deals and domestic policy

Recently, a coalition of civil society groups and law professors sent a letter asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. The request is notable for its timing, in light of the ongoing policy struggle over SESTA — a bill currently working its way through Congress that seeks to curb human trafficking through online platforms — and the risk that domestic platform companies face of losing (at least in part) the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push is not merely about a tradeoff between less trafficking and more online services; it is about the difference between promoting policies in a way that protects the rule of law and doing so in a way that undermines it.

Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”

It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.

In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.

ISDS allows foreign investors to sue NAFTA signatories in a tribunal when domestic laws of that signatory have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would serve as a deterrent for the signatory to continue to administer its laws in a similar fashion.

Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.

Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that (might be) approved of by democratically elected representatives.

A more principled stance would be skeptical of the domestic law override in both contexts.

Restating Copyright or creating copyright policy?

Take another example: some have suggested that the American Law Institute (“ALI”) is being used to subvert Congressional will. Since 2013, ALI has taken upon itself the project of “restating” the law of copyright. ALI is well known and respected for its common law restatements, but it may be that something more than mere restatement is going on here. As the NY Bar Association recently observed:

The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.  

It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.

But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted that:

While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”

Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a ‘Restatement’ of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.

Pyrrhic Policy Victories

Attempts to change copyright or entrench liability immunity through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.

It's no surprise that some are frustrated and concerned about intermediary liability and copyright issues: on the margin, it's definitely harder to operate an Internet platform if it faces sweeping liability for the actions of third parties (whether for human trafficking or for copyright infringement). Maybe copyright law needs to be reformed, and perhaps intermediary liability must be maintained exactly as it is (or expanded). But the right way to arrive at these policy outcomes is not through backdoors — and it is not to begin with the assertion that such outcomes are required.

Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.

We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.

Last week, several major drug makers marked the new year by announcing annual increases in list prices.  In addition to drug maker Allergan—which pledged last year to keep price increases below 10 percent and, true to its word, reported 2018 price increases of 9.5 percent—several other companies also stuck to single-digit increases.  Although list or “sticker” prices generally increased by around 9 percent for most drugs, after discounts negotiated with various health plans, the net prices that consumers and insurers actually pay will rise far less. For example, Allergan expects that payors will see net price increases of only 2 to 3 percent in 2018.

However, price increases won't generate the same returns for brand drug companies that they once did.  As insurers and pharmacy benefit managers consolidate and increase their market share, they have been able to capture an increasing share of the money spent on drugs for themselves. Indeed, a 2017 report found that, of the money spent on prescription drugs by patients and health plans at the point of sale, brand drug makers realized only 39 percent.  Meanwhile, supply-chain participants, such as pharmacy benefit managers, realized 42 percent of these expenditures.  What's more, year after year, brand drug makers have seen their share of these point-of-sale expenditures decrease while supply-chain entities have kept a growing share for themselves.

Brand drug makers have also experienced a dramatic decline in the return on their R&D investment.  A recent Deloitte study reports that, for the large drug makers it has followed since 2010, R&D returns have dropped from over 10 percent to under 4 percent for the last two years.  The ability of supply-chain entities to capture an increasing share of drug expenditures is responsible for at least part of this decline; the study reports that average peak sales for drugs have slowly dropped over time, mirroring drug makers' decreasing share of expenditures.  In addition, the decline in R&D returns can be traced to the increasing cost of bringing drugs to market; for the companies Deloitte studied, the cost to bring a drug to market has increased from just over $1.1 billion in 2010 to almost $2 billion in 2017.

Brand drug makers’ decreasing share of drug expenditures and declining R&D returns reduce incentives to innovate.  As the payoff from innovation declines, fewer companies will devote the substantial resources necessary to develop innovative new drugs.  In addition, innovation is threatened as brand companies increasingly face uncertainty about the patent rights of the drugs they do bring to market.  As I’ve discussed in a previous post,  the unbalanced inter partes review (IPR) process created under the Leahy-Smith America Invents Act in 2012 has led to significantly higher patent invalidation rates.  Compared to traditional district-court litigation, several pro-challenger provisions under IPR—including a lower standard of proof, a broader claim construction standard, and the ability of patent challengers to force patent owners into duplicative litigation—have resulted in twice as many patents deemed invalid in IPR proceedings.  Moreover, the lack of a standing requirement in IPR proceedings has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific settlement demands.  Even supporters of IPR proceedings recognize the flaws with the system; as Senator Orrin Hatch stated in a 2017 speech: “Such manipulation is contrary to the intent of IPR and the very purpose of intellectual property law. . . I think Congress needs to take a look at it.” Although the constitutionality of the IPR process is currently under review by the U.S. Supreme Court, if the unbalanced process remains unchanged, the significant uncertainty it creates for drug makers’ patent rights will lead to less innovation in the pharmaceutical industry.  Drug makers will have little incentive to spend billions of dollars to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.

We are likely to see a renewed push for drug pricing reforms in 2018 as access to affordable drugs remains a top policy priority.  Although Congress has yet to come together in support of any specific proposal, several states are experimenting with reforms that aim to lower drug prices by requiring more pricing transparency and notice of price increases.  As lawmakers consider these and other reforms, they should consider the current challenges that drug makers already face as their share of drug expenditures and R&D returns decline and patent rights remain uncertain.  Reforms that further threaten drug makers’ financial incentives to innovate could reduce our access to life-saving and life-improving new drugs.

Introduction and Summary

On December 19, 2017, the U.S. Court of Appeals for the Second Circuit presented Broadcast Music, Inc. (BMI) with an early Christmas present.  Specifically, the Second Circuit commendably affirmed the District Court for the Southern District of New York’s September 2016 ruling rejecting the U.S. Department of Justice’s (DOJ) August 2016 reinterpretation of its longstanding antitrust consent decree with BMI.  Because the DOJ reinterpretation also covered a parallel DOJ consent decree with the American Society of Composers, Authors, and Publishers (ASCAP), the Second Circuit’s decision by necessary implication benefits ASCAP as well, although it was not a party to the suit.

The Second Circuit’s holding is sound as a matter of textual interpretation and wise as a matter of economic policy.  Indeed, DOJ’s current antitrust leadership, which recognizes the importance of vibrant intellectual property licensing in the context of patents (see here), should be pleased that the Second Circuit rescued it from a huge mistake by the Obama Administration DOJ in the context of copyright licensing.

Background

BMI and ASCAP are the two leading U.S. “performing rights organizations” (PROs).  They contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transaction costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (roughly 90 percent combined).

Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially-filed consent decrees in 1941.

The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.  The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.  The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees, and requested the submission of public comments to aid it in its deliberations.  This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees – but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility.  Specifically, DOJ stated that the decrees needed to be “more consistently applied.”  By this, the DOJ meant that BMI and ASCAP should only grant blanket licenses that cover all of the rights to 100 percent of the works in the PROs’ respective catalogs (“full-work licensing”), not licenses that cover only partial interests in those works.  DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Was Bad as a Matter of Policy

DOJ's August 4 interpretation rejected industry practice.  Under it, ASCAP and BMI were permitted to offer only licenses covering all of the copyright interests in a musical composition, even where the composition is a joint work held by multiple fractional owners.

For example, consider a band of five composer-musicians, each of whom has a fractional interest in the copyright covering the band's new album, which is a joint work.  Prior to the DOJ's new interpretation, each musician could offer a partial interest in the joint work to a performing rights organization, reflecting that musician's share of the total copyright interest covering the work.  The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.  Following the new interpretation, however, BMI and ASCAP could no longer offer users partial licenses to that work.  This denied the band's individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.

As the two PROs warned, this approach, if upheld, would “cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”

The PROs’ views were bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ's new interpretation of the BMI and ASCAP consent decrees involved an abridgment of economic freedom.  It further limited the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts.  DOJ made no compelling showing that a new consent decree constraint (a 100 percent licensing requirement) was needed to promote competition.  Far from promoting competition, DOJ's new interpretation undermined it.  DOJ micromanagement of copyright licensing by consent decree reinterpretation was a costly new regulatory initiative that reflected a lack of appreciation for intellectual property rights, which incentivize innovation.  In short, DOJ's latest interpretation of the ASCAP and BMI decrees was terrible policy.

The New DOJ Interpretation Ran Counter to International Norms

The new DOJ interpretation had unfortunate international policy implications as well.  According to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, the new interpretation departed from international norms in the music licensing industry and would have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.

The New DOJ Interpretation Was Defective as a Matter of Law, and the District Court and the Second Circuit So Held

As I explained in a November 2016 Heritage Foundation commentary (citing arguments made by counsel for BMI), DOJ’s new interpretation not only was bad domestic and international policy, it was inconsistent with sound textual construction of the decrees themselves.  The BMI decree (and therefore the analogous ASCAP decree as well) did not expressly require 100 percent licensing and did not unambiguously prohibit fractional licensing.  Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized.  DOJ’s new interpretation ignored this principle.  It also was at odds with a report of the U.S. Copyright Office that concluded the BMI consent decree “must be understood to include partial interests in musical works.”  Furthermore, the new interpretation was belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing fees for royalties on a fractional basis.  Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.

The district court for the Southern District of New York accepted these textual arguments in its September 2016 ruling, granting BMI's request for a declaratory judgment that the BMI decree did not require 100 percent (“full-work”) licensing.  The court explained:

Nothing in the Consent Decree gives support to the Division’s views. If a fractionally-licensed composition is disqualified from inclusion in BMI’s repertory, it is not for violation of any provision of the Consent Decree. While the Consent Decree requires BMI to license performances of those compositions “the right of public performances of which [BMI] has or hereafter shall have the right to license or sublicense” (Art. II(C)), it contains no provision regarding the source, extent, or nature of that right. It does not address the possibilities that BMI might license performances of a composition without sufficient legal right to do so, or under a worthless or invalid copyright, or users might perform a music composition licensed by fewer than all of its creators. . . .

The Consent Decree does not regulate the elements of the right to perform compositions. Performance of a composition under an ineffective license may infringe an author’s rights under copyright, contract or other law, but it does not infringe the Consent Decree, which does not extend to matters such as the invalidity or value of copyrights of any of the compositions in BMI’s repertory. Questions of the validity, scope and limits of the right to perform compositions are left to the congruent and competing interests in the music copyright market, and to copyright, property and other laws, to continue to resolve and enforce. Infringements (and fractional infringements) and remedies are not part of the Consent Decree’s subject-matter.

The Second Circuit affirmed, agreeing with the district court’s reading of the decree:

The decree does not address the issue of fractional versus full work licensing, and the parties agree that the issue did not arise at the time of the . . . [subsequent] amendments [to the decree]. . . .

This appeal begins and ends with the language of the consent decree. It is a “well-established principle that the language of a consent decree must dictate what a party is required to do and what it must refrain from doing.” Perez v. Danbury Hosp., 347 F.3d 419, 424 (2d Cir. 2003); United States v. Armour & Co., 402 U.S. 673, 682 (1971) (“[T]he scope of a consent decree must be discerned within its four corners…”). “[C]ourts must abide by the express terms of a consent decree and may not impose additional requirements or supplementary obligations on the parties even to fulfill the purposes of the decree more effectively.” Perez, 347 F.3d at 424; see also Barcia v. Sitkin, 367 F.3d 87, 106 (2d Cir. 2004) (internal citations omitted) (The district court may not “impose obligations on a party that are not unambiguously mandated by the decree itself.”). Accordingly, since the decree is silent on fractional licensing, BMI may (and perhaps must) offer them unless a clear and unambiguous command of the decree would thereby be violated. See United States v. Int’l Bhd. Of Teamsters, Chauffeurs, Warehousemen & Helpers of Am., AFLCIO, 998 F.2d 1101, 1107 (2d Cir. 1993); see also Armour, 402 U.S. at 681-82.

Conclusion

The federal courts wisely have put to rest an ill-considered effort by the Obama Antitrust Division to displace longstanding industry practices that allowed efficient flexibility in the licensing of copyright interests by PROs.  Let us hope that the Trump Antitrust Division will not just accept the Second Circuit’s decision, but will positively embrace it as a manifestation of enlightened antitrust-IP policy – one in harmony with broader efforts by the Division to restore sound thinking to the antitrust treatment of patent licensing and intellectual property in general.